VMWare - bottleneck - Questions
-
The reading disks, the writing disks, and the network itself all have bottleneck potential, as does the CPU if there is any kind of transformation going on. Is the VM being used while this happens?
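As a mental model, the copy can only go as fast as its slowest link. Here's a minimal Python sketch of that idea; every number in it is a made-up placeholder, not a measurement from this setup:
```python
# Minimal "slowest link wins" sketch: an end-to-end copy can only run as
# fast as its slowest component. All figures below are hypothetical
# placeholders, not measurements from this setup.

component_limits_mbps = {
    "source disk read": 800,        # assumed sequential read ceiling
    "network (GigE)": 1000,         # line rate, before protocol overhead
    "destination disk write": 160,  # assumed RAID 10 SATA write ceiling
    "cpu (transformation)": 2000,   # e.g. compression or checksumming
}

bottleneck = min(component_limits_mbps, key=component_limits_mbps.get)
print(f"Bottleneck: {bottleneck} at {component_limits_mbps[bottleneck]} Mbps")
```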
-
This is the server sending the VM out.
-
@scottalanmiller said:
What is the speed that you are getting? What speed were you expecting?
Considering I have dual 1 Gb NICs on each server, I was expecting at least something close to 1 Gbps; instead, assuming I'm reading the graphs right, I'm getting something like 80 Mbps.
-
Your disks are holding steady at 20 MB/s, it looks like. Could you be disk bound?
-
@Dashrender said:
@scottalanmiller said:
What is the speed that you are getting? What speed were you expecting?
Considering I have dual 1 Gb NICs on each server, I was expecting at least something close to 1 Gbps; instead, assuming I'm reading the graphs right, I'm getting something like 80 Mbps.
Why do you assume that the NIC would be the bottleneck? Many shops use iSCSI over GigE for storage because they are rarely able to push enough IOPS from their arrays to saturate a GigE connection. In a streaming scenario you might be able to, but it really depends on what the disks are doing and what they are. Keeping GigE saturated isn't trivial unless you are on SSD.
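To put rough numbers on that, here's a quick sketch of the arithmetic; the IO size and per-spindle IOPS are assumed typical values, not figures from these arrays:
```python
# Rough arithmetic on how hard it is to saturate GigE with disk IO.
# IO size and per-spindle IOPS are assumed typical values, not measured.

gige_mb_per_sec = 1000 / 8              # 1 Gbps line rate ~= 125 MB/s
io_size_kb = 64                         # assumed average IO size
iops_to_fill_gige = gige_mb_per_sec * 1024 / io_size_kb
print(f"IOPS needed to fill GigE at {io_size_kb} KB IOs: {iops_to_fill_gige:.0f}")

# A 7200 RPM spindle manages very roughly 75-100 random IOPS, so even an
# 8-drive array falls far short of the ~2000 IOPS computed above.
per_spindle_iops = 80                   # assumed random IOPS per drive
print(f"8 spindles: ~{8 * per_spindle_iops} random IOPS")
```
Streaming IO changes the picture, since each spindle can move far more data sequentially than randomly, which is why saturating GigE without SSDs takes a mostly sequential workload.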
-
@scottalanmiller said:
Your disks are holding steady at 20 MB/s, it looks like. Could you be disk bound?
Definitely. I don't know what I should expect.
Inbound server (IBM ML3650M2) has 8 NL SATA 500 GB drives in RAID 10
Outbound server (HP DL380p G8) has 8 SAS 300 GB drives in RAID 10
-
@scottalanmiller said:
@Dashrender said:
@scottalanmiller said:
What is the speed that you are getting? What speed were you expecting?
Considering I have dual 1 Gb NICs on each server, I was expecting at least something close to 1 Gbps; instead, assuming I'm reading the graphs right, I'm getting something like 80 Mbps.
Why do you assume that the NIC would be the bottleneck? Many shops use iSCSI over GigE for storage because they rarely are able to push enough IOPS from their arrays to saturate a GigE connection. In a streaming scenario, you might be able to, but it really depends on what the disks are doing and what they are. Keeping GigE saturated isn't trivial unless you are on SSD.
Why do I think that? Because I can download files from my file server at over 600 Mbps. But you're right, I definitely need to consider the disk.
-
@Dashrender said:
Inbound server (IBM ML3650M2) has 8 NL SATA 500 GB drives in RAID 10
Outbound server (HP DL380p G8) has 8 SAS 300 GB drives in RAID 10
Of the two, the writing SATA drives will be the bottleneck. RAID 10 cuts write performance in half, so that is 4x SATA write speeds, whereas the reading side is 8x SAS read speeds. I don't know your spindle speeds, and those could vary by up to 100% on either side, but assuming 7200 RPM, some amount of random IO, and systems that are not completely idle, the drives might well be the bottleneck here. Very hard to say, but quite possible.
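To make that estimate concrete, here's a back-of-the-envelope sketch under those same assumptions; the per-drive figures are typical guesses, not measurements from these servers:
```python
# Back-of-the-envelope RAID 10 streaming throughput for the two arrays.
# Per-drive figures are assumed typical values, not measured from these servers.

drives = 8
nl_sata_write_mb_s = 100   # assumed streaming write per 7200 RPM NL SATA drive
sas_read_mb_s = 150        # assumed streaming read per SAS drive

# RAID 10: every write hits a mirrored pair, so effective write throughput
# is half the spindle count; reads can be served from all spindles.
write_side = (drives / 2) * nl_sata_write_mb_s   # best-case streaming writes
read_side = drives * sas_read_mb_s               # best-case streaming reads

print(f"Write side (SATA RAID 10): ~{write_side:.0f} MB/s best case")
print(f"Read side  (SAS RAID 10):  ~{read_side:.0f} MB/s best case")
# Random IO and a non-idle system drag both numbers down hard, which is why
# the 20 MB/s seen on the graphs is plausible for the writing side.
```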
-
How did it go? What is the current status?
-
The copy finished overnight after I left. I came in this morning and everything started right up with no issues.
Currently I don't know of a way to see when it finished.
-
OK, found the logs. I started the transfer at 3:18 PM and it finished at 4:53 AM. Damn, it took nearly 14 hours to transfer 360 GB.
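A quick sanity check on those numbers (the dates in the sketch are arbitrary; only the times come from the log):
```python
from datetime import datetime

# Sanity-checking the log: 360 GB between 3:18 PM and 4:53 AM the next day.
# The dates are arbitrary; only the times come from the log.
start = datetime(2015, 1, 1, 15, 18)
end = datetime(2015, 1, 2, 4, 53)
seconds = (end - start).total_seconds()   # 13h 35m = 48,900 s

mb_per_sec = 360 * 1024 / seconds
print(f"Duration: {seconds / 3600:.2f} hours")
print(f"Average:  {mb_per_sec:.1f} MB/s (~{mb_per_sec * 8:.0f} Mbps)")
```
That works out to roughly 7.5 MB/s, or about 60 Mbps, in the same ballpark as the ~80 Mbps read off the graphs earlier.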
-
That's definitely very slow.