Slow traffic over IPsec tunnel after a move but public traffic still fast
-
You have to specifically set source and destination addresses in iperf to be sure the traffic is going over the tunnel. It would be better to run iperf on hosts on both sides, rather than on the firewall itself.
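Concretely, that looks something like the following with classic iperf2 (the addresses are stand-ins for hosts on each LAN; `-B` pins the client's source address so the flow has to match the IPsec policy instead of going out the WAN):

```shell
# On a host behind the far firewall: run the server.
iperf -s

# On a host behind the near firewall: bind the source (-B) to this
# host's LAN address and target the remote host's LAN address, so the
# traffic is routed through the tunnel rather than out the WAN.
iperf -c 10.75.1.80 -B 10.15.1.156 -w 1MB -P 3
```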
iperf over the tunnel from a host to a host:
[SUM] 0.0-49.5 sec 5.50 MBytes 932 Kbits/sec
iperf over the tunnel from pfSense to pfSense box:
[SUM] 0.0-10.1 sec 102 MBytes 85.2 Mbits/sec
That indicates something strange locally, if that iperf is really running over the tunnel. You didn't enable jumbo frames or something else weird somewhere?
Yeah, if it's IKEv2 there is no forcing NAT-T.
iperf over the tunnel from a host to a host:
[SUM] 0.0-49.5 sec 5.50 MBytes 932 Kbits/sec
That is far, far less than 6-8 Mbits/sec. What are you really seeing?
-
You have to specifically set source and destination addresses in iperf to be sure the traffic is going over the tunnel. It would be better to run iperf on hosts on both sides, rather than on the firewall itself.
That indicates something strange locally, if that iperf is really running over the tunnel. You didn't enable jumbo frames or something else weird somewhere?
I think I'm tracking with you. I'm using IP addresses for iperf, e.g.:
iperf -c 10.75.1.80 -w 1MB -P 3
Yeah, if it's IKEv2 there is no forcing NAT-T.
iperf over the tunnel from a host to a host:
[SUM] 0.0-49.5 sec 5.50 MBytes 932 Kbits/sec
That is far, far less than 6-8 Mbits/sec. What are you really seeing?
Yep, that's what I'm seeing over the tunnel. I may have my Ms and ms wrong in my post, but the [SUM] lines are copy/paste.
That said, as much as I'd like to learn and understand what's going on, this may be short-lived. I'm due to get two 1 Gb lines from Sonic in a week and, fingers crossed, that might fix things. At least it will be a new ISP with different peering and different routes.
-
[SUM] 0.0-49.5 sec 5.50 MBytes 932 Kbits/sec
That says that during the test it transmitted 5.50 MBytes of total data at 932 Kbits/sec.
So it's actually worse than you think.
-
[SUM] 0.0-49.5 sec 5.50 MBytes 932 Kbits/sec
That says that during the test it transmitted 5.50 MBytes of total data at 932 Kbits/sec.
So it's actually worse than you think.
Well, to make things even more bizarre… after making those hash changes to P2 and disabling all hardware acceleration (no reboots yet), an hour later I get this:
Client connecting to prima, TCP port 5001
TCP window size: 1.00 MByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
[  7] local 10.15.1.156 port 49770 connected with 10.75.1.20 port 5001
[  6] local 10.15.1.156 port 49769 connected with 10.75.1.20 port 5001
[  8] local 10.15.1.156 port 49771 connected with 10.75.1.20 port 5001
[  9] local 10.15.1.156 port 49772 connected with 10.75.1.20 port 5001
[ ID] Interval       Transfer     Bandwidth
[  9]  0.0-10.0 sec  17.9 MBytes  14.9 Mbits/sec
[  7]  0.0-10.1 sec  18.0 MBytes  15.0 Mbits/sec
[  6]  0.0-10.1 sec  29.0 MBytes  24.1 Mbits/sec
[  8]  0.0-10.2 sec  9.12 MBytes  7.52 Mbits/sec
[SUM]  0.0-10.2 sec  74.0 MBytes  61.0 Mbits/sec
I'm not complaining! I'll take it… But who knows what change worked...or is it just time of day, or did someone wave a rubber chicken over the right port on a switch at some Level 3 center?
-
well….drat!
Good news: I got the new gig WAN connection and it's glorious! :) I can easily get ~800 Mbits/sec to public servers... but I'm still only getting ~20 Mbits/sec over the IPsec tunnel. I've tried with and without MSS clamping. I'm still using AES-GCM as the transport.
Any troubleshooting tips?
-
Take a packet capture and see what Wireshark tells you.
There is obviously something misconfigured/wrong somewhere.
You need to be testing from something inside on one side to something inside on the other.
You should not be running iperf anywhere on the firewalls themselves.
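For the capture itself, one low-effort approach (interface names are assumptions; substitute your actual WAN) is to grab both the encrypted and decrypted sides from the pfSense shell and compare them in Wireshark:

```shell
# Encrypted side: ESP packets leaving the WAN (em0 assumed here).
tcpdump -ni em0 -w /tmp/wan-esp.pcap esp

# Decrypted side: FreeBSD/pfSense exposes post-decryption IPsec
# traffic on the enc0 interface.
tcpdump -ni enc0 -w /tmp/ipsec-clear.pcap
```

Comparing packet sizes, loss, and retransmissions between the two captures usually narrows down whether the problem is fragmentation, packet loss, or something on the hosts themselves.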
-
I was troubleshooting similar issues with a Sophos UTM box a few weeks back. It turned out I had to reduce the MTU: it was set to 1500, and dropping it down to 1472 resolved* the issue. I made this change on the AT&T gateway and on both the LAN and WAN interfaces in the UTM.
* I say that with a caveat, as the fastest connection I've been able to test with was a Comcast 175 Mbps down/25 up account. I too have fiber and can upload at full speed. There are two limitations at play: first, whether or not the 1472-byte MTU is correct, and second, the encryption limit of the box the UTM runs on. Not to mention, the UTM is virtualized under ESXi 6.5 on an i5-5250 box. Still, from what I've read it should be capable of 250-300 Mbps over the VPN.
-
Thanks everyone!
I'm still troubleshooting. I tried running Wireshark on a remote host and then passing some traffic from a host on the other side of the tunnel. I didn't see anything glaring, but then again I'm not extremely fluent in analyzing things at the packet level. I also tried playing around with changing the MTU on my WANs and with MSS clamping; 1472 seems to be the largest ping I can send without fragmentation. But an MTU of 1472 vs 1500 didn't make a difference in tunnel speeds. Ditto with MSS clamping.
Tonight I'm going to try, just for giggles, recreating the tunnel from scratch. I can't see a config error, but maybe doing it fresh and new will clean out any gremlins.
I'm grateful for the tips - if you have any more troubleshooting steps, please keep 'em coming! :)
-
I also tried playing around with changing my MTU on my WANs and MSS clamping - 1472 seems to be the largest ping I can send without fragmentation.
That is what you should be able to send with a normal 1500-byte Ethernet MTU, so you do not have a path MTU problem. I would stop messing about with MSS and MTU, as you are likely just wasting your time and possibly making things worse.
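The arithmetic behind that 1472 number, for anyone following along:

```shell
# A 1500-byte Ethernet MTU must carry the IPv4 header (20 bytes) and
# the ICMP header (8 bytes) before any ping payload, so the largest
# unfragmented ping payload is 1500 - 20 - 8 = 1472 bytes.
mtu=1500
ip_hdr=20
icmp_hdr=8
echo $((mtu - ip_hdr - icmp_hdr))   # prints 1472
```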
-
Well….I'm at a loss.
I'm now testing from hosts behind pfSense (vs between pfSense boxes themselves).
I thought I had a breakthrough when I found aes-ni disabled in Advanced but realized that was a troubleshooting tip here :)
MTU is back to defaults, no MSS clamping, using IKEv2...
Both boxes also have OpenVPN tunnels to other boxes, but the average load on those is something like 1 Mbit/s.
Without the tunnel, I easily get 230-250 Mbits/sec. With the tunnel (over the gig WAN line that's new since my original post) I get 30-50 Mbits/sec. Xeon on one side* and an SG-4860 on the other. Neither CPU spikes above 30-40%.
I tried recreating the P1 and P2 tunnels - no change.
* I failed to mention: the Xeon is pfSense running as a VM on Proxmox 5. It's the only VM, the CPU type is "host", and it has 16 GB of RAM allocated and direct disk access, so it's basically as close to bare metal as it can be. But if anyone has any tips related to Proxmox and AES performance, lay 'em on me!
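One way to check whether AES-NI is actually reaching the guest (a generic FreeBSD/OpenSSL check, not something from this thread) is to benchmark AES-GCM from the pfSense shell with and without the instruction set:

```shell
# Does the VM's CPU advertise AES-NI? FreeBSD lists it in the CPU
# Features2 line of the boot messages.
grep AESNI /var/run/dmesg.boot

# Benchmark AES-256-GCM with AES-NI available...
openssl speed -evp aes-256-gcm

# ...and again with AES-NI masked off via OPENSSL_ia32cap (bit 57),
# for comparison.
env OPENSSL_ia32cap="~0x200000000000000" openssl speed -evp aes-256-gcm
```

If the two benchmark runs come out nearly identical, the VM isn't really getting AES-NI despite the "host" CPU type, and that alone could cap IPsec throughput.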