Solved VPN File Transfer Problems
-
@dafyre said in VPN File Transfer Problems:
enabled anything at all, you'd take a performance hit.
Can you elaborate on the "anything at all"?
We've got a couple of prioritization rules in play for SIP, but everything else over the VPN link is disabled.
-
First off, you have to look at the traffic. I assume the FortiGate has a web interface. So how much traffic goes over the IPsec tunnel when you transfer a file? How many tunnels do you have? How much traffic goes over the WAN interface?
What's the CPU load on the FortiGate when transferring files? If the hardware can't offload IPsec, don't expect anything near what the specs say. The firewall probably has a very low-spec CPU inside. OpenVPN often can't be hardware-offloaded the way IPsec can, so look at the firewall's OpenVPN specs to get a ballpark figure for what you'd get on IPsec without hardware offload.
Actually, looking at the FortiGate 101E specs, it can do 250 Mbit/s of OpenVPN, so expect something like that for total throughput over IPsec if it's not hardware offloaded. Some QoS and packet-inspection features can prevent hardware offload, and that is one thing that can cause a severe performance hit like what @dafyre was talking about.
Second, best practice is to use the same ISP on both ends of a VPN link because you want to be on the same backbone. If you don't, you should expect slower link speeds. You're not going to get 10 MB/s over a 100 Mbit link or 100 MB/s over a 1 Gbit link.
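To put rough numbers on that, here is a quick sketch of the bits-to-bytes arithmetic (plain math, nothing vendor-specific):

```python
# Back-of-the-envelope link math: convert a link's rated megabits per
# second into its best-case transfer rate in megabytes per second.
# Real-world file transfers land below this ceiling once TCP/IP and
# VPN encapsulation overhead are subtracted.
def link_mbps_to_mbytes(mbps: float) -> float:
    """Raw ceiling in MB/s for a link rated at `mbps` Mbit/s."""
    return mbps / 8  # 8 bits per byte

print(link_mbps_to_mbytes(100))   # 12.5  -> a 100 Mbit link caps at 12.5 MB/s raw
print(link_mbps_to_mbytes(1000))  # 125.0 -> a 1 Gbit link caps at 125 MB/s raw
```

So even before encryption overhead, the link's rated speed in Mbit/s divided by eight is the hard ceiling on MB/s.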
For detailed analysis you need to do a packet capture to see what is happening. Just as an example, I had a problem with one VPN link that turned out to be a LACP problem on the switch.
-
Anyway, like anything, you have to approach it logically so you can eliminate things.
For instance, just looking at the link: can you run iperf or a similar test between the firewalls (on the firewalls themselves)? Preferably with random data. That way you can check what your actual VPN link speed is, and if that is slow you can exclude anything that has to do with servers and switches.
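If iperf isn't available on one of the endpoints, the same idea can be roughed out in a few lines of Python. This is a simplified stand-in, not a replacement for iperf; loopback is used here for illustration, and in a real test the listener would run on the far side of the link:

```python
# Minimal iperf-style throughput check: push random (incompressible)
# data over a TCP socket and report MB/s. Loopback only, as a sketch.
import os
import socket
import threading
import time

PAYLOAD = os.urandom(1024 * 1024)  # 1 MiB of random data
ROUNDS = 16                        # 16 MiB total

def _receiver(listener: socket.socket, result: dict) -> None:
    """Accept one connection and count every byte until the sender closes."""
    conn, _ = listener.accept()
    received = 0
    while True:
        chunk = conn.recv(65536)
        if not chunk:
            break
        received += len(chunk)
    conn.close()
    result["bytes"] = received

def run_test() -> float:
    """Send ROUNDS payloads over loopback and return the observed MB/s."""
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))  # ephemeral port
    listener.listen(1)
    result: dict = {}
    t = threading.Thread(target=_receiver, args=(listener, result))
    t.start()

    client = socket.create_connection(listener.getsockname())
    start = time.perf_counter()
    for _ in range(ROUNDS):
        client.sendall(PAYLOAD)
    client.close()
    t.join()  # wait until the receiver has drained everything
    elapsed = time.perf_counter() - start
    listener.close()

    megabytes = result["bytes"] / (1024 * 1024)
    return megabytes / elapsed

if __name__ == "__main__":
    print(f"{run_test():.1f} MB/s over loopback")
```

Random data matters because VPN hardware and WAN optimizers can compress repetitive payloads and report flattering, unrealistic numbers.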
-
Appreciate the insights and advice, but just to be clear: my main concern is that I can get up to 10x the speed traversing the same VPN / ISP / network infrastructure when the server is on a 1 Gbit copper link in the data center as opposed to when the server is on a 10 Gbit fiber link in the data center. I'm fine with disparity from site to site; that's of course to be expected given the different ISPs, network conditions, and workloads at the different locations.
I've done some iperf-based testing on the issue already and have shown that raw WAN speeds are acceptable and that I can get substantially more speed with iperf than with a file transfer. I've also seen that iperf on Windows is garbage; the speeds are nowhere near what I'm getting on Linux in as close to a like-for-like comparison as I can manage.
-
@Pete-S said in VPN File Transfer Problems:
Just as an example I had a problem with one VPN link that turned out to be a LACP problem on the switch.
Do you recall what the LACP issue was? It's in play at a couple of points along the path in the data center.
-
If your servers have Intel or Broadcom NICs in them, you may want to test disabling VMQ.
-
@notverypunny It was some kind of configuration error on the switch. I think the server tried to negotiate LACP while the switch didn't reply as it should and thought some kind of loop was going on. Traffic would pass, but intermittently. From the outside it looked like it worked, just slower. Looking closer at packet captures, there were a lot of unusual packets, which is the reason we started to look at the switches. After reconfiguring the port from scratch everything worked, so I don't know exactly what it was.
-
@dafyre said in VPN File Transfer Problems:
If your servers have Intel or Broadcom NICs in them, you may want to test disabling VMQ.
I thought that issue was fixed a while ago?
-
@Dashrender said in VPN File Transfer Problems:
@dafyre said in VPN File Transfer Problems:
If your servers have Intel or Broadcom NICs in them, you may want to test disabling VMQ.
I thought that issue was fixed a while ago?
In theory.
-
The newest piece of gear I have is a Dell R730xd (purchased last year), and we had to disable VMQ on that one. Server 2012 R2 as the host OS. I can't remember which NIC it has off the top of my head, but we did disable VMQ on all the network adapters in that system.
-
@dafyre said in VPN File Transfer Problems:
Server 2012 R2 as the host OS.
That might be your issue right there. That's OLD.
-
@scottalanmiller said in VPN File Transfer Problems:
@dafyre said in VPN File Transfer Problems:
Server 2012 R2 as the host OS.
That might be your issue right there. That's OLD.
mutters something about dumb vendors
-
@scottalanmiller said in VPN File Transfer Problems:
@dafyre said in VPN File Transfer Problems:
Server 2012 R2 as the host OS.
That might be your issue right there. That's OLD.
I believe it was supposedly fixed in Hyper-V 2016, and possibly in a patch for Hyper-V 2012 R2, but I still disable it out of habit.
It doesn't matter unless you have 10 gigabit links, I believe.
-
@JaredBusch said in VPN File Transfer Problems:
@scottalanmiller said in VPN File Transfer Problems:
@dafyre said in VPN File Transfer Problems:
Server 2012 R2 as the host OS.
That might be your issue right there. That's OLD.
I believe it was supposedly fixed in Hyper-V 2016, and possibly in a patch for Hyper-V 2012 R2, but I still disable it out of habit.
It doesn't matter unless you have 10 gigabit links, I believe.
It was a driver problem, not an OS problem. Primarily Broadcom NICs, which Dell often uses (because they cost less).
https://support.microsoft.com/en-us/help/2902166/poor-network-performance-on-virtual-machines-on-a-windows-server-2012
Anyway, it doesn't make much sense to use VMQ here. You should use SR-IOV instead, so the VM can talk directly to the hardware without the overhead of the hypervisor. That's for 10G and faster NICs.
-
That's right, this was a Hyper-V issue. Though the OP hasn't said what VM platform he's using.
I assumed Windows Server 2012 R2 was just a VM.
-
I just need to comment because every time I start seeing the title of this topic, it looks like "Vile Transfer Problems" until I look directly at the title.
-
UPDATE:
Had a call with FortiGate support this AM, and I'll be trying the following either later tonight or first thing tomorrow AM, before anything important is happening on the network:
host-shortcut-mode {bi-directional | host-shortcut}

Due to NP6 internal packet buffer limitations, some offloaded packets received at a 10 Gbps interface and destined for a 1 Gbps interface can be dropped, reducing performance for TCP and IP tunnel traffic. If you experience this performance reduction, you can use the following command to disable offloading sessions passing from 10 Gbps interfaces to 1 Gbps interfaces:

```
config system npu
    set host-shortcut-mode host-shortcut
end
```

Select host-shortcut to stop offloading TCP and IP tunnel packets passing from 10 Gbps interfaces to 1 Gbps interfaces. TCP and IP tunnel packets passing from 1 Gbps interfaces to 10 Gbps interfaces are still offloaded as normal. If host-shortcut is set to the default bi-directional setting, packets in both directions are offloaded. This option is only available if your FortiGate has 10G and 1G interfaces accelerated by NP6 processors.
-
WOOT WOOT!! This seems to have fixed things.
Now if I could just get a decent ISP connection in Knoxville...
-
@notverypunny said in VPN File Transfer Problems:
WOOT WOOT!! this seems to have fixed things.
Now if I could just get a decent ISP connection in Knoxville...
Talk to @Phil-CommQuotes
-
I agree, talk to Phil! :upside-down_face:
Thanks for the shout out, Jared.
NVP, message me the address and what you need and I'll work my magic.