Netgate Discussion Forum

    slow pfsense IPSec performance

      stephenw10 Netgate Administrator

      Mmm, 1ms between them is like being in the same data center, or at least geographically very close. What is the route between them?

      When you ran the test outside the tunnel, how was that done? Still between the two ESXi hosts?

        stephenw10 Netgate Administrator @Cool_Corona

        @cool_corona said in slow pfsense IPSec performance:

        Nobody uses Atoms for Virtualization.....

        Ha. Assume nothing! 😉

          mauro.tridici @Cool_Corona

          @cool_corona my ISP is not throttling VPNs; we already use several host-to-LAN VPNs without any problem (the iperf test bitrate is optimal). We are experiencing this low bitrate only with the IPsec LAN-to-LAN VPN.

            mauro.tridici @stephenw10

            @stephenw10 We have 2 data centres in the same city; they are interconnected with a dedicated 1Gb link on the GARR network.
            The test outside the tunnel is between the WAN interfaces of the two pfSense instances, i.e. between PF_A and PF_B.

              mauro.tridici @stephenw10

              @stephenw10 our hypervisors have "2 x Intel(R) Xeon(R) Gold 5218 CPU - 32 cores @ 2.30GHz"

                stephenw10 Netgate Administrator

                Hmm, that should be plenty fast enough. What happens if you test across the tunnel between the two pfSense instances directly? So set the source IP on the client to be in the P2.

                  keyser Rebel Alliance @mauro.tridici

                  @mauro-tridici said in slow pfsense IPSec performance:

                  @stephenw10 our hypervisors have "2 x Intel(R) Xeon(R) Gold 5218 CPU - 32 cores @ 2.30GHz"

                  Then it’s definitely not hardware that is limiting the transfer speed. Those CPUs/platforms have loads of power for this use case.

                  Love the no fuss of using the official appliances :-)

                    mauro.tridici @stephenw10

                    @stephenw10 Sorry, I didn't understand which test I should do.
                    Should I do an iperf or a ping test between PF_A[opt1] and PF_B[opt1]?

                    PF_A has
                    WAN IP: xxxxxxxx
                    LAN IP (for management only): 192.168.240.11
                    OPT1 IP: 192.168.202.1

                    PF_B has
                    WAN IP: yyyyyyyy
                    LAN IP (for management only): 192.168.220.123
                    OPT1 IP: 192.168.201.1

                      stephenw10 Netgate Administrator

                      Normally you run iperf3 server on one pfSense box then run iperf3 client on the other one and give it the WAN address of the first one to connect to.

                      But to test over the VPN the traffic has to match the defined P2 policy so at the client end you need to set the Bind address to, say, the LAN IP and then point it at the LAN IP of the server end.

                      Then you are testing directly across the tunnel without going through any internal interfaces that might be throttling.

                      So run iperf3 -s on PF_A as normal.
                      Then on PF_B run iperf3 -B 192.168.220.123 -c 192.168.240.11
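
                      If you want to check the reverse direction as well, iperf3 has a -R flag that makes the server end send, so something like:

                      iperf3 -B 192.168.220.123 -c 192.168.240.11 -R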

                        mauro.tridici @stephenw10

                        @stephenw10 OK, thanks. This is the output of the iperf test:

                        iperf3 -B 192.168.201.1 -c 192.168.202.1
                        Connecting to host 192.168.202.1, port 5201
                        [ 5] local 192.168.201.1 port 2715 connected to 192.168.202.1 port 5201
                        [ ID] Interval Transfer Bitrate Retr Cwnd
                        [ 5] 0.00-1.04 sec 25.5 MBytes 205 Mbits/sec 0 639 KBytes
                        [ 5] 1.04-2.01 sec 27.0 MBytes 234 Mbits/sec 0 720 KBytes
                        [ 5] 2.01-3.03 sec 29.7 MBytes 246 Mbits/sec 0 736 KBytes
                        [ 5] 3.03-4.01 sec 28.2 MBytes 240 Mbits/sec 1 439 KBytes
                        [ 5] 4.01-5.05 sec 28.2 MBytes 229 Mbits/sec 0 521 KBytes
                        [ 5] 5.05-6.04 sec 25.2 MBytes 213 Mbits/sec 0 585 KBytes
                        [ 5] 6.04-7.01 sec 25.8 MBytes 222 Mbits/sec 0 642 KBytes
                        [ 5] 7.01-8.01 sec 28.4 MBytes 240 Mbits/sec 0 701 KBytes
                        [ 5] 8.01-9.00 sec 28.0 MBytes 235 Mbits/sec 0 735 KBytes
                        [ 5] 9.00-10.04 sec 29.1 MBytes 237 Mbits/sec 0 735 KBytes


                        [ ID] Interval Transfer Bitrate Retr
                        [ 5] 0.00-10.04 sec 275 MBytes 230 Mbits/sec 1 sender
                        [ 5] 0.00-10.13 sec 275 MBytes 228 Mbits/sec receiver

                        Please note that the OPT1 interfaces are the ones involved in the P2. The LAN interfaces are used only to reach and manage the pfSense instances.

                          stephenw10 Netgate Administrator

                          Ok, so no difference. Do you see any improvement with more parallel streams? -P 4
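
                          For example, the same client command as your last test with four streams would be something like:

                          iperf3 -B 192.168.201.1 -c 192.168.202.1 -P 4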

                          Edit: Or actually slightly slower, but testing from the firewall itself usually is.

                            mauro.tridici @stephenw10

                            @stephenw10 mmmh, no, no improvement, I'm sorry.
                            It is a big mystery :(
                            I don't know what I should check, or where the error or issue might be...

                              stephenw10 Netgate Administrator

                              Like identical total throughput?

                              If so, it really starts to look like something is limiting it somewhere.

                              You could try an OpenVPN or Wireguard tunnel instead.
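
                              For reference, pfSense's WireGuard GUI maps onto the standard WireGuard fields, so a minimal wg-quick style sketch of the PF_B end would look something like this (the keys, the 10.6.0.0/30 transit subnet and the port are placeholders; AllowedIPs carries the PF_A OPT1 network from your P2):

                              [Interface]
                              # PF_B side of the test tunnel (placeholder key and transit address)
                              PrivateKey = <PF_B private key>
                              Address = 10.6.0.2/30
                              ListenPort = 51820

                              [Peer]
                              # PF_A side
                              PublicKey = <PF_A public key>
                              Endpoint = <PF_A WAN IP>:51820
                              # transit peer address plus the PF_A OPT1 network
                              AllowedIPs = 10.6.0.1/32, 192.168.202.0/24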

                                Averlon

                                What is the NIC type of these VMs (vmxnet3 or e1000)?

                                You may want to check your IPsec Advanced Settings:

                                [screenshot: IPsec Advanced Settings page]

                                Ensure AES-NI is enabled and running:

                                [screenshot: AES-NI cryptographic acceleration active]
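
                                From a shell on each pfSense VM you can also confirm that the guest actually sees the instruction set and that pfSense has loaded the AES-NI module; something like:

                                grep -o 'AESNI' /var/run/dmesg.boot
                                kldstat | grep aesni

                                If AESNI doesn't show up in the CPU feature flags, the hypervisor isn't exposing it to that guest.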

                                The throughput is definitely too low on your VMs. Even unencrypted traffic isn't hitting line rate, assuming the Gigabit link is otherwise idle.
                                On a 4 vCPU VM running on the Xeon E3 box, I measured around 600 Mbit/s of IPsec throughput.

                                @stephenw10 said in slow pfsense IPSec performance:

                                You could try an OpenVPN or Wireguard tunnel instead.

                                I don't think OpenVPN can outperform IPsec, at least without DCO. Haven't tried that feature yet :)

                                  stephenw10 Netgate Administrator

                                  It is surprisingly fast with DCO, but I would still use IPsec in a situation like this. I only suggested it as a test in case there is something throttling IPsec specifically.

                                  Definitely worth checking async crypto. Usually that kills throughput almost completely on hardware that isn't compatible, though.

                                  Steve

                                    mauro.tridici @Averlon

                                    @averlon Sorry for my late answer, but I have been busy during the last few days.
                                    AES-NI is available on only one of the hypervisors, so I can't enable it on both ends.

                                    I would also like to know how much the VMs' NIC types impact this.
                                    One VM uses a vmxnet3 NIC, the other one is using e1000. What do you suggest doing with these NICs?

                                    Thank you,
                                    Mauro

                                      stephenw10 Netgate Administrator

                                      Unless you're passing through hardware, vmxnet will be faster.

                                      But you must add a tunable to enable multi-queue on them:
                                      https://docs.netgate.com/pfsense/en/latest/hardware/tune.html#vmware-vmx-4-interfaces
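
                                      If I remember that page correctly, the key entry is the MSI blacklist override in /boot/loader.conf.local, which lets vmx use MSI-X and therefore multiple queues (verify against the link above):

                                      hw.pci.honor_msi_blacklist="0"

                                      Then reboot for it to take effect.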

                                      Steve

                                        Averlon @mauro.tridici

                                        @mauro-tridici said in slow pfsense IPSec performance:

                                        AES-NI is available on only one of the hypervisors, so I can't enable it on both ends.

                                        This is most likely the bottleneck here. You have to ensure that AES-NI is available on both ends. Otherwise you won't see any higher throughput for your IPsec traffic, even with 4 vCPUs.

                                        @stephenw10 said in slow pfsense IPSec performance:

                                        Unless you're passing through hardware, vmxnet will be faster.

                                        vmxnet will have less overhead, but won't necessarily deliver more throughput. I had horrible performance on pfSense 2.4.x / FreeBSD 11.x with vmxnet3: something between 600 and 700 Mbit on average, with high variance for bulk downloads.
                                        I'm still using e1000 on good old pfSense 2.5.1. For a Gigabit link it delivers almost full rate. I just re-tested it from a VM NAT'ed by pfSense on the same ESXi host. This is the same old Xeon E3-1245 v6 box:

                                        [screenshot: throughput test result]

                                        @mauro-tridici: You should test both NIC types with the current version and implement the tunable stephenw10 mentioned. It may improve unencrypted throughput, but won't solve your IPsec issue.

                                          mauro.tridici @stephenw10

                                          @stephenw10 No luck, VMXNET + the tunable didn't help me. Thank you again for your support.

                                          I think I should change from IPsec to a different LAN-to-LAN VPN solution.
                                          Could you please tell me which solution you suggest?

                                          Thank you in advance,
                                          Mauro

                                            mauro.tridici @Averlon

                                            @averlon Thank you for your support. Unfortunately, vmxnet and the tunable didn't help, and I think I have to give up.
                                            Unencrypted throughput between the WAN interfaces of the two pfSense instances is very good; encrypted traffic over the IPsec tunnel is very poor...

                                            Is there any other solution to create a LAN-to-LAN VPN easily?

                                            Thank you,
                                            Mauro
