Slow IPSec Site-to-Site Speeds
-
Good evening, everyone. I am running into slow speeds on an IPsec site-to-site tunnel between two sites with nearly identical setups.
We basically run (3) things through this tunnel:
1.) VoIP Phone System (Avaya IP Office 500v2)
2.) Database Connections for Store Software (Site A has connections to Site B's DB and Site B has connections to Site A's DB)
3.) Domain Controller synchronization, late-night backups, and other miscellaneous utilities running off hours (read: after midnight local time)

I would assume that we should be able to get at least 100-160 Mbps across the tunnel; however, that is not the case.
When I use Speedtest.net, the sites have the following real-time speeds:
Site A: Download - 201.09 Mbps; Upload - 201.09 Mbps
Site B: Download - 620.15 Mbps; Upload - 628 Mbps

Before I started all of this, we were getting about 24 Mbps upload and 26 Mbps download across the OpenVPN tunnel. I then upgraded to pfSense+ as I was told this would enable OpenVPN Data Channel Offloading (DCO), which is supposed to make OpenVPN considerably faster. Even WITH that enabled, I was still only able to get, at most, 35 Mbps upload and 38 Mbps download across the OpenVPN tunnel.
These slow speeds are causing serious issues with accessing the software at the opposite site (read: 3+ minutes to load some of the data; to be fair, I know there are serious database design/query issues involved, but it shouldn't take that long given how "small" the system is). The cross-site access is also overloading the tunnel during the day, which causes phone calls to drop randomly when both sides are accessing each other's systems.
I decided to switch to IPsec on Saturday, as several people have told me they get faster speeds with it. I am now seeing somewhat better numbers, but they still fall short of what these connections should be capable of.
Here is my current configuration at each site with IPSec enabled:
Site A:
(2) Dell 1U servers w/ dual E5-2620 (24 cores @ 2.00 GHz)
32GB Memory
HA Setup with (1) WAN, (1) LAN, and (1) HASYNC connection
pfSense+ v 24.11
200 Mbps by 200 Mbps Cox fiber connection
WAN Connection direct to internet (No NAT)
LAN Network is CAT6 with Gigabit Switches (max run 220')
IPSec policy setup and connected to Site B
P1 Tunnel Settings:
- IKE version: IKEv2
- Mutual PSK with My IP Address & Peer IP Address, 128-character PSK
- Encryption Algorithm: AES, 256-bit key; Hash: SHA256; DH Group: 14
P2 Settings:
- Protocol: ESP
- Encryption: AES256-GCM, 128b
- Hash: None
- PFS Key: 14
Site B:
(2) Dell 1U servers w/ dual E5-2620 (24 cores @ 2.00 GHz)
32GB Memory
HA Setup with (1) WAN, (1) LAN, and (1) HASYNC connection
pfSense+ v 24.11
500 Mbps by 500 Mbps Cox fiber connection
WAN Connection direct to internet (No NAT)
Network is CAT6 with Gigabit Switches (max run 115')
IPSec policy setup and connected to Site A
P1 Tunnel Settings:
- IKE version: IKEv2
- Mutual PSK with My IP Address & Peer IP Address, 128-character PSK
- Encryption Algorithm: AES, 256-bit key; Hash: SHA256; DH Group: 14
P2 Settings:
- Protocol: ESP
- Encryption: AES256-GCM, 128b
- Hash: None
- PFS Key: 14
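For reference, a quick way to confirm what the tunnel actually negotiated is to query strongSwan directly (pfSense Plus ships strongSwan, so swanctl should be available from the console shell or Diagnostics > Command Prompt; this is just a sketch, not pfSense documentation):

# List the active IKE and child SAs with their negotiated algorithms; look for
# the AES-256/SHA256/DH14 (modp2048) proposal on the IKE SA and the AES-GCM
# proposal on the child (phase 2) SA.
swanctl --list-sas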
As mentioned, IPSec is up and running, and now when I do a simple LAN speed test with a 50MB file, these are the results:
Site A to Site B:
Upload - 43.67 Mbps
Download - 86.6 Mbps

Site B to Site A:
Upload - 61.57 Mbps
Download - 42.12 Mbps

Does anyone have any ideas to speed this up (outside of fixing the database queries, which I cannot do)?
Thanks,
TSoF
Edit: Updated to confirm that these devices are connected directly to the internet and are not behind NAT.
Edit 2: Updated to include the memory installed (4 modules @ 8 GB each). -
I am working on troubleshooting with iperf. Here is what I have, going public IP to public IP (read: not through the IPsec tunnel) and private IP to private IP (read: through the IPsec tunnel):
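For reference, the tests below were run roughly like this (a sketch; the far side runs the server, and the addresses are the ones that appear in the output):

# Listener on the far end (Site A in this case), default port 5201:
iperf3 -s
# Public-path test from Site B (not through the tunnel), verbose output:
iperf3 -c 207.162.137.152 -t 10 --verbose
# Same direction, but through the IPsec tunnel (private addresses):
iperf3 -c 10.1.0.2 -t 10 --verbose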
Public IP to Public IP - Site B to Site A
iperf 3.17.1
FreeBSD ngr2-rtr01.mynextgenrx.com 15.0-CURRENT FreeBSD 15.0-CURRENT #0 plus-RELENG_24_11-n256407-1bbb3194162: Fri Nov 22 05:08:46 UTC 2024 root@freebsd:/var/jenkins/workspace/pfSense-Plus-snapshots-24_11-main/obj/amd64/AKWlAIiM/var/jenkins/workspace/pfSense-Plus-snapshots-24_11-main/sources/FreeBS amd64
Control connection MSS 1460
Time: Mon, 03 Mar 2025 16:13:38 UTC
Connecting to host 207.162.137.152, port 5201
Cookie: g3oi22k7ksrx6tkmxub52glxdgoc7l2xocks
TCP MSS: 1460 (default)
[ 5] local 75.61.85.194 port 15753 connected to 207.162.137.152 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 10.9 MBytes 90.8 Mbits/sec 0 399 KBytes
[ 5] 1.00-2.06 sec 12.9 MBytes 102 Mbits/sec 0 399 KBytes
[ 5] 2.06-3.06 sec 12.2 MBytes 103 Mbits/sec 0 399 KBytes
[ 5] 3.06-4.00 sec 11.5 MBytes 102 Mbits/sec 0 399 KBytes
[ 5] 4.00-5.01 sec 12.2 MBytes 102 Mbits/sec 0 399 KBytes
[ 5] 5.01-6.03 sec 12.9 MBytes 106 Mbits/sec 0 444 KBytes
[ 5] 6.03-7.06 sec 15.5 MBytes 126 Mbits/sec 0 522 KBytes
[ 5] 7.06-8.05 sec 15.4 MBytes 131 Mbits/sec 42 181 KBytes
[ 5] 8.05-9.00 sec 5.75 MBytes 50.4 Mbits/sec 0 213 KBytes
[ 5] 9.00-10.00 sec 6.88 MBytes 57.7 Mbits/sec 0 233 KBytes
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 116 MBytes 97.4 Mbits/sec 42 sender
[ 5] 0.00-10.04 sec 115 MBytes 96.2 Mbits/sec receiver
CPU Utilization: local/sender 12.4% (0.0%u/12.4%s), remote/receiver 9.3% (2.0%u/7.3%s)
snd_tcp_congestion cubic
rcv_tcp_congestion cubic
iperf Done.
Private IP to Private IP THROUGH IPSec - Site B to Site A
Connecting to host 10.1.0.2, port 5201
[ 5] local 10.2.0.2 port 31524 connected to 10.1.0.2 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 16.0 MBytes 134 Mbits/sec 0 1.07 MBytes
[ 5] 1.00-2.01 sec 16.0 MBytes 133 Mbits/sec 411 218 KBytes
[ 5] 2.01-3.00 sec 7.12 MBytes 60.2 Mbits/sec 0 257 KBytes
[ 5] 3.00-4.04 sec 8.50 MBytes 68.9 Mbits/sec 0 281 KBytes
[ 5] 4.04-5.00 sec 8.50 MBytes 73.8 Mbits/sec 0 293 KBytes
[ 5] 5.00-6.06 sec 9.50 MBytes 75.0 Mbits/sec 0 306 KBytes
[ 5] 6.06-7.00 sec 9.12 MBytes 81.6 Mbits/sec 0 327 KBytes
[ 5] 7.00-8.00 sec 10.4 MBytes 86.9 Mbits/sec 0 351 KBytes
[ 5] 8.00-9.00 sec 10.9 MBytes 91.1 Mbits/sec 0 374 KBytes
[ 5] 9.00-10.05 sec 12.4 MBytes 99.3 Mbits/sec 0 398 KBytes
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.05 sec 108 MBytes 90.5 Mbits/sec 411 sender
[ 5] 0.00-10.08 sec 107 MBytes 89.4 Mbits/sec receiver
iperf Done.
Public IP to Public IP - Site A to Site B
iperf 3.17.1
FreeBSD ngr1-rtr01.mynextgenrx.com 15.0-CURRENT FreeBSD 15.0-CURRENT #0 plus-RELENG_24_11-n256407-1bbb3194162: Fri Nov 22 05:08:46 UTC 2024 root@freebsd:/var/jenkins/workspace/pfSense-Plus-snapshots-24_11-main/obj/amd64/AKWlAIiM/var/jenkins/workspace/pfSense-Plus-snapshots-24_11-main/sources/FreeBS amd64
Control connection MSS 1460
Time: Mon, 03 Mar 2025 16:53:33 UTC
Connecting to host 75.61.85.194, port 5201
Cookie: fediws4o47ceehawmnp3zegzg4zch3oa2wn3
TCP MSS: 1460 (default)
[ 5] local 207.162.137.152 port 33788 connected to 75.61.85.194 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.04 sec 11.8 MBytes 94.8 Mbits/sec 299 367 KBytes
[ 5] 1.04-2.06 sec 12.1 MBytes 99.6 Mbits/sec 0 412 KBytes
[ 5] 2.06-3.05 sec 12.9 MBytes 109 Mbits/sec 0 441 KBytes
[ 5] 3.05-4.06 sec 13.8 MBytes 114 Mbits/sec 0 458 KBytes
[ 5] 4.06-5.06 sec 14.0 MBytes 118 Mbits/sec 0 466 KBytes
[ 5] 5.06-6.05 sec 14.0 MBytes 118 Mbits/sec 0 468 KBytes
[ 5] 6.05-7.00 sec 13.5 MBytes 119 Mbits/sec 0 469 KBytes
[ 5] 7.00-8.01 sec 14.2 MBytes 119 Mbits/sec 0 482 KBytes
[ 5] 8.01-9.01 sec 15.0 MBytes 126 Mbits/sec 0 503 KBytes
[ 5] 9.01-10.00 sec 15.1 MBytes 128 Mbits/sec 0 523 KBytes
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 136 MBytes 114 Mbits/sec 299 sender
[ 5] 0.00-10.03 sec 135 MBytes 113 Mbits/sec receiver
CPU Utilization: local/sender 11.2% (0.0%u/11.1%s), remote/receiver 17.8% (1.2%u/16.7%s)
snd_tcp_congestion cubic
rcv_tcp_congestion cubic
iperf Done.
Private IP to Private IP THROUGH IPSec - Site A to Site B
iperf 3.17.1
FreeBSD ngr1-rtr01.mynextgenrx.com 15.0-CURRENT FreeBSD 15.0-CURRENT #0 plus-RELENG_24_11-n256407-1bbb3194162: Fri Nov 22 05:08:46 UTC 2024 root@freebsd:/var/jenkins/workspace/pfSense-Plus-snapshots-24_11-main/obj/amd64/AKWlAIiM/var/jenkins/workspace/pfSense-Plus-snapshots-24_11-main/sources/FreeBS amd64
Control connection MSS 1460
Time: Mon, 03 Mar 2025 16:54:25 UTC
Connecting to host 10.2.0.2, port 5201
Cookie: aji4w5n3rzsobdygb2zxvqydl7mrciv3gum7
TCP MSS: 1460 (default)
[ 5] local 10.1.0.2 port 30411 connected to 10.2.0.2 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.06 sec 11.1 MBytes 87.8 Mbits/sec 210 362 KBytes
[ 5] 1.06-2.06 sec 11.6 MBytes 97.9 Mbits/sec 0 406 KBytes
[ 5] 2.06-3.02 sec 12.2 MBytes 107 Mbits/sec 0 434 KBytes
[ 5] 3.02-4.06 sec 14.1 MBytes 113 Mbits/sec 0 453 KBytes
[ 5] 4.06-5.01 sec 13.0 MBytes 115 Mbits/sec 0 462 KBytes
[ 5] 5.01-6.03 sec 14.4 MBytes 118 Mbits/sec 0 463 KBytes
[ 5] 6.03-7.06 sec 14.4 MBytes 117 Mbits/sec 0 464 KBytes
[ 5] 7.06-8.02 sec 13.6 MBytes 120 Mbits/sec 0 464 KBytes
[ 5] 8.02-9.06 sec 14.9 MBytes 119 Mbits/sec 0 482 KBytes
[ 5] 9.06-10.01 sec 14.1 MBytes 126 Mbits/sec 0 499 KBytes
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.01 sec 134 MBytes 112 Mbits/sec 210 sender
[ 5] 0.00-10.04 sec 133 MBytes 111 Mbits/sec receiver
CPU Utilization: local/sender 63.1% (0.1%u/63.0%s), remote/receiver 40.2% (2.2%u/38.0%s)
snd_tcp_congestion cubic
rcv_tcp_congestion cubic
iperf Done.
Speeds seem better using iperf, though they are still somewhat slower than they should be.
-
As an update: I've increased both sites' internet service to 1 Gbps/1 Gbps.
Going public IP to public IP, iperf now gives me over 500 Mbps on average, still only about half of the line rate (and I've confirmed speed tests are getting 990 Mbps/990 Mbps at each site).
However, across the IPsec tunnel we're only getting about 200-220 Mbps, roughly 1/5th of the link.
I have noticed a lot of retransmits across the IPsec tunnel. I've asked both Cox (Site A) and AT&T (Site B) if they are filtering or anything, and both are looking into it; however, this is just mind-boggling.
I even went into System / Advanced / Firewall & NAT and enabled MSS clamping, and have tried it at the default (1400), the fiber threshold (1500), and the setting everyone says is best (1392); none of them have made a real difference.
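For what it's worth, one way to sanity-check where fragmentation starts (a sketch; 10.2.0.2 is the far tunnel address from the iperf tests, and the sizes assume a 1500-byte WAN MTU):

# Probe the largest ICMP payload that crosses the tunnel without fragmenting.
# 1472 = 1500 - 20 (IP header) - 8 (ICMP header); step the size down until
# replies come back. On FreeBSD/pfSense, -D sets the don't-fragment bit.
ping -D -s 1472 10.2.0.2
# On a Linux LAN host the equivalent is: ping -M do -s 1472 10.2.0.2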
Here are iperf results:
Site A => Site B (public IP)
iperf 3.17.1
FreeBSD ngr1-rtr01 15.0-CURRENT FreeBSD 15.0-CURRENT #0 plus-RELENG_24_11-n256407-1bbb3194162: Fri Nov 22 05:08:46 UTC 2024 root@freebsd:/var/jenkins/workspace/pfSense-Plus-snapshots-24_11-main/obj/amd64/AKWlAIiM/var/jenkins/workspace/pfSense-Plus-snapshots-24_11-main/sources/FreeBS amd64
Control connection MSS 1460
Time: Fri, 21 Mar 2025 16:23:14 UTC
Connecting to host XX.XX.XX.XXX, port 5201
Cookie: cc2izicnhnnjse7iqk2nmex7qqd4fxg5mo7x
TCP MSS: 1460 (default)
[ 5] local XXX.XXX.XXX.XXX port 17124 connected to XX.XX.XX.XXX port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.01 sec 42.9 MBytes 355 Mbits/sec 0 2.00 MBytes
[ 5] 1.01-2.04 sec 62.9 MBytes 516 Mbits/sec 0 2.00 MBytes
[ 5] 2.04-3.06 sec 63.1 MBytes 517 Mbits/sec 0 2.00 MBytes
[ 5] 3.06-4.06 sec 61.2 MBytes 514 Mbits/sec 0 2.00 MBytes
[ 5] 4.06-5.06 sec 61.5 MBytes 516 Mbits/sec 0 2.00 MBytes
[ 5] 5.06-6.04 sec 59.9 MBytes 513 Mbits/sec 0 2.00 MBytes
[ 5] 6.04-7.06 sec 63.0 MBytes 517 Mbits/sec 0 2.00 MBytes
[ 5] 7.06-8.06 sec 60.9 MBytes 512 Mbits/sec 0 2.00 MBytes
[ 5] 8.06-9.01 sec 58.2 MBytes 516 Mbits/sec 0 2.00 MBytes
[ 5] 9.01-10.06 sec 64.8 MBytes 515 Mbits/sec 0 2.00 MBytes
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.06 sec 598 MBytes 499 Mbits/sec 0 sender
[ 5] 0.00-10.09 sec 598 MBytes 497 Mbits/sec receiver
CPU Utilization: local/sender 49.7% (0.1%u/49.6%s), remote/receiver 72.1% (2.2%u/69.9%s)
snd_tcp_congestion cubic
rcv_tcp_congestion cubic
iperf Done.
Site A => Site B (private IP over ipsec vpn)
iperf 3.17.1
FreeBSD ngr1-rtr01 15.0-CURRENT FreeBSD 15.0-CURRENT #0 plus-RELENG_24_11-n256407-1bbb3194162: Fri Nov 22 05:08:46 UTC 2024 root@freebsd:/var/jenkins/workspace/pfSense-Plus-snapshots-24_11-main/obj/amd64/AKWlAIiM/var/jenkins/workspace/pfSense-Plus-snapshots-24_11-main/sources/FreeBS amd64
Control connection MSS 1460
Time: Fri, 21 Mar 2025 16:21:17 UTC
Connecting to host 10.2.0.2, port 5201
Cookie: xklht4ypo2ayrdjuqudpktjv26zhtfu6zxlj
TCP MSS: 1460 (default)
[ 5] local 10.1.0.2 port 43703 connected to 10.2.0.2 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 18.5 MBytes 155 Mbits/sec 0 1.07 MBytes
[ 5] 1.00-2.00 sec 23.4 MBytes 196 Mbits/sec 856 1.02 MBytes
[ 5] 2.00-3.00 sec 22.9 MBytes 192 Mbits/sec 0 1.14 MBytes
[ 5] 3.00-4.00 sec 26.5 MBytes 222 Mbits/sec 0 1.23 MBytes
[ 5] 4.00-5.00 sec 25.4 MBytes 213 Mbits/sec 349 812 KBytes
[ 5] 5.00-6.02 sec 22.6 MBytes 187 Mbits/sec 0 909 KBytes
[ 5] 6.02-7.01 sec 24.9 MBytes 210 Mbits/sec 0 978 KBytes
[ 5] 7.01-8.00 sec 23.8 MBytes 201 Mbits/sec 0 1.01 MBytes
[ 5] 8.00-9.00 sec 23.4 MBytes 196 Mbits/sec 0 1.04 MBytes
[ 5] 9.00-10.00 sec 24.1 MBytes 202 Mbits/sec 0 1.06 MBytes
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 236 MBytes 198 Mbits/sec 1205 sender
[ 5] 0.00-10.05 sec 236 MBytes 197 Mbits/sec receiver
CPU Utilization: local/sender 83.8% (0.0%u/83.7%s), remote/receiver 67.6% (2.6%u/65.1%s)
snd_tcp_congestion cubic
rcv_tcp_congestion cubic
iperf Done.
Site B => Site A (public IP)
iperf 3.17.1
FreeBSD ngr2-rtr01. 15.0-CURRENT FreeBSD 15.0-CURRENT #0 plus-RELENG_24_11-n256407-1bbb3194162: Fri Nov 22 05:08:46 UTC 2024 root@freebsd:/var/jenkins/workspace/pfSense-Plus-snapshots-24_11-main/obj/amd64/AKWlAIiM/var/jenkins/workspace/pfSense-Plus-snapshots-24_11-main/sources/FreeBS amd64
Control connection MSS 1460
Time: Fri, 21 Mar 2025 16:25:47 UTC
Connecting to host XXX.XXX.XXX.XXX, port 5201
Cookie: zp5bvc6ptrizxhqpv22kephqjtr6lqkomay5
TCP MSS: 1460 (default)
[ 5] local XX.XX.XX.XXX port 32927 connected to XXX.XXX.XXX.XXX port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.01 sec 43.2 MBytes 360 Mbits/sec 0 2.00 MBytes
[ 5] 1.01-2.06 sec 68.0 MBytes 543 Mbits/sec 0 2.00 MBytes
[ 5] 2.06-3.03 sec 59.5 MBytes 513 Mbits/sec 0 2.00 MBytes
[ 5] 3.03-4.04 sec 66.6 MBytes 552 Mbits/sec 0 2.00 MBytes
[ 5] 4.04-5.03 sec 61.9 MBytes 527 Mbits/sec 0 2.00 MBytes
[ 5] 5.03-6.00 sec 59.9 MBytes 515 Mbits/sec 0 2.00 MBytes
[ 5] 6.00-7.01 sec 65.2 MBytes 544 Mbits/sec 0 2.00 MBytes
[ 5] 7.01-8.00 sec 65.2 MBytes 551 Mbits/sec 0 2.00 MBytes
[ 5] 8.00-9.01 sec 65.9 MBytes 551 Mbits/sec 0 2.00 MBytes
[ 5] 9.01-10.00 sec 63.1 MBytes 532 Mbits/sec 0 2.00 MBytes
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 619 MBytes 519 Mbits/sec 0 sender
[ 5] 0.00-10.03 sec 618 MBytes 517 Mbits/sec receiver
CPU Utilization: local/sender 68.7% (0.2%u/68.5%s), remote/receiver 51.6% (8.2%u/43.3%s)
snd_tcp_congestion cubic
rcv_tcp_congestion cubic
iperf Done.
Site B => Site A (private IP over ipsec vpn)
iperf 3.17.1
FreeBSD ngr2-rtr01. 15.0-CURRENT FreeBSD 15.0-CURRENT #0 plus-RELENG_24_11-n256407-1bbb3194162: Fri Nov 22 05:08:46 UTC 2024 root@freebsd:/var/jenkins/workspace/pfSense-Plus-snapshots-24_11-main/obj/amd64/AKWlAIiM/var/jenkins/workspace/pfSense-Plus-snapshots-24_11-main/sources/FreeBS amd64
Control connection MSS 1460
Time: Fri, 21 Mar 2025 16:27:24 UTC
Connecting to host 10.1.0.2, port 5201
Cookie: 4pllgm6rbn5vf77ca56qml2bbdts24hmwof2
TCP MSS: 1460 (default)
[ 5] local 10.2.0.2 port 25280 connected to 10.1.0.2 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 17.5 MBytes 147 Mbits/sec 0 1.07 MBytes
[ 5] 1.00-2.01 sec 24.8 MBytes 207 Mbits/sec 0 1.61 MBytes
[ 5] 2.01-3.06 sec 25.8 MBytes 206 Mbits/sec 0 1.61 MBytes
[ 5] 3.06-4.05 sec 24.0 MBytes 202 Mbits/sec 0 1.61 MBytes
[ 5] 4.05-5.01 sec 24.4 MBytes 214 Mbits/sec 0 1.61 MBytes
[ 5] 5.01-6.00 sec 31.1 MBytes 263 Mbits/sec 0 2.00 MBytes
[ 5] 6.00-7.00 sec 28.4 MBytes 238 Mbits/sec 0 2.00 MBytes
[ 5] 7.00-8.00 sec 23.2 MBytes 195 Mbits/sec 0 2.00 MBytes
[ 5] 8.00-9.00 sec 27.9 MBytes 234 Mbits/sec 0 2.00 MBytes
[ 5] 9.00-10.00 sec 34.4 MBytes 288 Mbits/sec 0 2.00 MBytes
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 262 MBytes 220 Mbits/sec 0 sender
[ 5] 0.00-10.03 sec 262 MBytes 219 Mbits/sec receiver
CPU Utilization: local/sender 91.3% (0.1%u/91.3%s), remote/receiver 53.4% (2.5%u/51.0%s)
snd_tcp_congestion cubic
rcv_tcp_congestion cubic
iperf Done.
I am open to suggestions here.
Thanks in advance,
TSoF -
@TheStormsOfFury Yeah, these problems can be REALLY hard to troubleshoot and sometimes there just is no solution.
Obviously your problem is in the TCP “ramp up” of throughput from Site A to Site B - that's where you are seeing a lot of dropped packets.
First: a few things to “clean up” the numbers a bit:
1: Don't test iperf from pfSense itself; it does not - ever - really show how things are, because it is not meant to terminate sessions, CPU-scheduling-wise. You need to run the iperf client and server on internal hosts on the networks so pfSense just routes.
2: You REALLY need to set the MSS clamping to 1400 now that you have symmetric Gbit. Otherwise you will have serious fragmentation pollution in the IPsec tunnel numbers, making the iperf tests very unpredictable.
3: Have you enabled IPsec-MB crypto or AES-NI crypto CPU support? Your CPU performance seems to be seriously impacted by the IPsec tunnel tests. That's mainly from running the iperf server/client itself, but there is still a huge difference in CPU usage between the public/public and private/private tests - much more than there should be on a server of that spec. (See the quick check below.)
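For point 3, a quick check from the pfSense console shell would look something like this (a sketch; the module name is the stock FreeBSD aesni(4) driver, and IPsec-MB/QAT would show up differently):

# Did the CPU report AES-NI at boot?
grep -c AESNI /var/run/dmesg.boot
# Is the aesni(4) driver loaded? (It may not appear here if it is compiled into
# the kernel; the GUI switch is under System > Advanced > Miscellaneous.)
kldstat | grep -i aesni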
Anyways - once these are all in place, you need to run tests with more than one session to see if that makes it scale (use “-P 4” on the client).
You also need to test both directions from the client on both sides, to see if it behaves similarly (use “-R” on the client).

This will likely not solve anything, but since there seems to be a packet-drop issue inside the tunnel at higher throughput, we need to see whether it is capped by total bandwidth or per single session. If it's the former, it might be something ISP-related.
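Something like this from a LAN host on each side (a sketch; <far-lan-host> is a placeholder for an internal machine behind the remote firewall running iperf3 -s):

# Four parallel streams for 30 seconds, through the tunnel:
iperf3 -c <far-lan-host> -t 30 -P 4
# The same test with the direction reversed (the server sends):
iperf3 -c <far-lan-host> -t 30 -P 4 -R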
Sidenote: Remove one CPU from both servers. Your hardware is vastly overpowered, and SMP adds unnecessary latency for anything running on the CPU that does not have the NIC as a local PCIe device (i.e. it is attached to the other CPU). Again - this is very unlikely to impact these numbers unless pfSense is really bad at SMP scheduling (I have no idea).
EDIT: Then again - you really have high CPU usage numbers for unencrypted tests… Perhaps there is something causing pfSense to really perform badly on your SMP setup? -
@TheStormsOfFury Forgot to mention - you should be able to get something close to Gbit in the public-to-public test, but you obviously aren't in single-session tests. Please make sure to do the parallel tests in public/public mode too. We need to know if it's your ISP throttling/oversubscribing, or if it's latency/out-of-order delivery between the sites that causes this.
What is the ping round-trip time between the sites?
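The reason the RTT matters: a single TCP stream tops out around cwnd / RTT, so the window sizes in your iperf output put a ceiling on throughput. As a rough worked example (the 30 ms RTT is an assumption - substitute the measured ping time): a ~2 MByte window gives about 2 * 8 / 0.030 ≈ 533 Mbit/s, which lines up with your public-path results, while the ~400 KByte windows in the earlier tunnel tests cap a single stream near 0.4 * 8 / 0.030 ≈ 107 Mbit/s.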