Netgate Discussion Forum

    pfSense on ESXi 6.7 - slow throughput

    • johnpoz LAYER 8 Global Moderator

      @helger said in pfSense on ESXi 6.7 - slow throughput:

      However I'm struggling with network throughput, WAN<->LAN. It seems to top out at around 60-80 Mbit no matter what I do.

      How exactly are you testing this?


      • helger @johnpoz

        @johnpoz https://speedtest.net, https://test.telenor.net, wget ftp://ftp.uninett.no/debian-iso/9.8.0/amd64/iso-dvd/debian-9.8.0-amd64-DVD-3.iso
        These are the three methods I've been using; all three give about the same result.
        Doing the same from a VM connected to the WAN vSwitch on the same ESXi host gives full speed.

        -Helge

        • helger

          Did an iperf test now as well. The window on the left is my workstation behind pfSense; the one on the right is an Ubuntu server running on the same ESXi host as pfSense, attached directly to the WAN side.

          [screenshot: side-by-side iperf results from the LAN workstation and the WAN-side Ubuntu VM]

          • johnpoz LAYER 8 Global Moderator

            So you're connecting to something out on the Internet?

            Dude, check from your WAN to your LAN, and LAN to WAN, through pfSense. And how many connections are you doing, like 10? Just do 1; you should see pretty close to the full speed of your local network.
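
            (For reference, a minimal version of that test, assuming iperf is installed on both ends and using a placeholder for the WAN-side VM's address:

            # on the VM attached to the WAN vSwitch
            iperf -s

            # on the LAN workstation, a single stream through pfSense
            iperf -c <wan-vm-ip> -P 1
            )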


            • helger @johnpoz

              @johnpoz Internet services, yes. The iperf run above is against a public iperf server.
              There seems to be something in the pfSense setup that is messing with me, since, as you can see, there is a major difference in speed going directly to the iperf server vs. via pfSense.

              Single thread test:
              From a LAN PC:
              iperf -c speedtest.serverius.net -P 1

              Client connecting to speedtest.serverius.net, TCP port 5001
              TCP window size: 85.0 KByte (default)

              [ 3] local 10.83.90.52 port 35472 connected with 178.21.16.76 port 5001
              [ ID] Interval Transfer Bandwidth
              [ 3] 0.0-10.1 sec 110 MBytes 91.1 Mbits/sec

              From a VM directly attached to the WAN interface/vSwitch:
              iperf -c speedtest.serverius.net -P 1

              Client connecting to speedtest.serverius.net, TCP port 5001
              TCP window size: 85.0 KByte (default)

              [ 3] local 88.88.101.151 port 44368 connected with 178.21.16.76 port 5001
              [ ID] Interval Transfer Bandwidth
              [ 3] 0.0-10.0 sec 397 MBytes 332 Mbits/sec

              With 10 threads:
              From a LAN PC:
              iperf -c speedtest.serverius.net -P 10

              Client connecting to speedtest.serverius.net, TCP port 5001
              TCP window size: 85.0 KByte (default)

              [ 13] local 10.83.90.52 port 35514 connected with 178.21.16.76 port 5001
              [ 11] local 10.83.90.52 port 35508 connected with 178.21.16.76 port 5001
              [ 5] local 10.83.90.52 port 35512 connected with 178.21.16.76 port 5001
              [ 6] local 10.83.90.52 port 35496 connected with 178.21.16.76 port 5001
              [ 7] local 10.83.90.52 port 35500 connected with 178.21.16.76 port 5001
              [ 3] local 10.83.90.52 port 35510 connected with 178.21.16.76 port 5001
              [ 9] local 10.83.90.52 port 35498 connected with 178.21.16.76 port 5001
              [ 12] local 10.83.90.52 port 35502 connected with 178.21.16.76 port 5001
              [ 4] local 10.83.90.52 port 35506 connected with 178.21.16.76 port 5001
              [ 8] local 10.83.90.52 port 35504 connected with 178.21.16.76 port 5001
              [ ID] Interval Transfer Bandwidth
              [ 6] 0.0-10.0 sec 15.1 MBytes 12.7 Mbits/sec
              [ 3] 0.0-10.0 sec 9.75 MBytes 8.17 Mbits/sec
              [ 7] 0.0-10.1 sec 10.5 MBytes 8.74 Mbits/sec
              [ 9] 0.0-10.1 sec 15.1 MBytes 12.6 Mbits/sec
              [ 12] 0.0-10.1 sec 7.75 MBytes 6.43 Mbits/sec
              [ 4] 0.0-10.2 sec 9.88 MBytes 8.13 Mbits/sec
              [ 5] 0.0-10.2 sec 9.88 MBytes 8.11 Mbits/sec
              [ 11] 0.0-10.3 sec 15.4 MBytes 12.6 Mbits/sec
              [ 13] 0.0-10.3 sec 10.1 MBytes 8.21 Mbits/sec
              [ 8] 0.0-10.3 sec 10.0 MBytes 8.11 Mbits/sec
              [SUM] 0.0-10.3 sec 114 MBytes 92.0 Mbits/sec

              From a VM directly attached to the WAN interface/vSwitch:
              iperf -c speedtest.serverius.net -P 10

              Client connecting to speedtest.serverius.net, TCP port 5001
              TCP window size: 85.0 KByte (default)

              [ 5] local 88.88.101.151 port 44372 connected with 178.21.16.76 port 5001
              [ 3] local 88.88.101.151 port 44376 connected with 178.21.16.76 port 5001
              [ 6] local 88.88.101.151 port 44374 connected with 178.21.16.76 port 5001
              [ 7] local 88.88.101.151 port 44380 connected with 178.21.16.76 port 5001
              [ 8] local 88.88.101.151 port 44382 connected with 178.21.16.76 port 5001
              [ 10] local 88.88.101.151 port 44386 connected with 178.21.16.76 port 5001
              [ 9] local 88.88.101.151 port 44384 connected with 178.21.16.76 port 5001
              [ 4] local 88.88.101.151 port 44378 connected with 178.21.16.76 port 5001
              [ 11] local 88.88.101.151 port 44388 connected with 178.21.16.76 port 5001
              [ 12] local 88.88.101.151 port 44390 connected with 178.21.16.76 port 5001
              [ ID] Interval Transfer Bandwidth
              [ 3] 0.0-10.0 sec 61.4 MBytes 51.5 Mbits/sec
              [ 6] 0.0-10.0 sec 66.0 MBytes 55.4 Mbits/sec
              [ 12] 0.0-10.0 sec 39.6 MBytes 33.2 Mbits/sec
              [ 10] 0.0-10.0 sec 51.0 MBytes 42.7 Mbits/sec
              [ 9] 0.0-10.0 sec 47.0 MBytes 39.3 Mbits/sec
              [ 4] 0.0-10.0 sec 57.2 MBytes 47.9 Mbits/sec
              [ 11] 0.0-10.0 sec 71.2 MBytes 59.5 Mbits/sec
              [ 7] 0.0-10.1 sec 50.9 MBytes 42.3 Mbits/sec
              [ 8] 0.0-10.1 sec 45.1 MBytes 37.5 Mbits/sec
              [ 5] 0.0-10.1 sec 100 MBytes 83.5 Mbits/sec
              [SUM] 0.0-10.1 sec 590 MBytes 490 Mbits/sec

              So as you can see, the performance is significantly better using the WAN interface/vSwitch of the ESXi host from a regular VM than going through pfSense.
              So I can't imagine it being anything other than some sort of setting or incompatibility issue in the pfSense setup.

              • helger

                Another test done: I set up an iperf server on the VM connected directly to the WAN and ran the iperf client on the LAN host.
                Same speed achieved:

                1 thread:
                iperf -c 88.88.101.151 -P 1

                Client connecting to 88.88.101.151, TCP port 5001
                TCP window size: 85.0 KByte (default)

                [ 3] local 10.83.90.52 port 54354 connected with 88.88.101.151 port 5001
                [ ID] Interval Transfer Bandwidth
                [ 3] 0.0-10.1 sec 112 MBytes 92.4 Mbits/sec

                5 threads:
                iperf -c 88.88.101.151 -P 5

                Client connecting to 88.88.101.151, TCP port 5001
                TCP window size: 85.0 KByte (default)

                [ 5] local 10.83.90.52 port 54380 connected with 88.88.101.151 port 5001
                [ 7] local 10.83.90.52 port 54384 connected with 88.88.101.151 port 5001
                [ 6] local 10.83.90.52 port 54382 connected with 88.88.101.151 port 5001
                [ 4] local 10.83.90.52 port 54378 connected with 88.88.101.151 port 5001
                [ 3] local 10.83.90.52 port 54376 connected with 88.88.101.151 port 5001
                [ ID] Interval Transfer Bandwidth
                [ 6] 0.0-10.0 sec 54.2 MBytes 45.4 Mbits/sec
                [ 3] 0.0-10.0 sec 14.6 MBytes 12.2 Mbits/sec
                [ 7] 0.0-10.1 sec 15.0 MBytes 12.5 Mbits/sec
                [ 5] 0.0-10.2 sec 13.9 MBytes 11.4 Mbits/sec
                [ 4] 0.0-10.2 sec 14.8 MBytes 12.1 Mbits/sec
                [SUM] 0.0-10.2 sec 112 MBytes 92.2 Mbits/sec

                Connecting to the WAN VM from a host out on the internet:
                iperf -c 88.88.101.151 -P 5

                Client connecting to 88.88.101.151, TCP port 5001
                TCP window size: 85.0 KByte (default)

                [ 3] local 185.125.169.246 port 41574 connected with 88.88.101.151 port 5001
                [ 4] local 185.125.169.246 port 41576 connected with 88.88.101.151 port 5001
                [ 5] local 185.125.169.246 port 41578 connected with 88.88.101.151 port 5001
                [ 6] local 185.125.169.246 port 41580 connected with 88.88.101.151 port 5001
                [ 7] local 185.125.169.246 port 41582 connected with 88.88.101.151 port 5001
                [ ID] Interval Transfer Bandwidth
                [ 6] 0.0-10.0 sec 102 MBytes 85.9 Mbits/sec
                [ 4] 0.0-10.0 sec 122 MBytes 102 Mbits/sec
                [ 3] 0.0-10.1 sec 131 MBytes 109 Mbits/sec
                [ 7] 0.0-10.1 sec 118 MBytes 98.6 Mbits/sec
                [ 5] 0.0-10.1 sec 114 MBytes 94.5 Mbits/sec
                [SUM] 0.0-10.1 sec 588 MBytes 487 Mbits/sec

                • johnpoz LAYER 8 Global Moderator

                  @helger said in pfSense on ESXi 6.7 - slow throughput:

                  [ ID] Interval Transfer Bandwidth
                  [ 3] 0.0-10.1 sec 112 MBytes 92.4 Mbits/sec

                  You've got something wrong for sure... but stop testing with multiple connections. From WAN to LAN or LAN to WAN you should see pretty freaking close to full speed. Until you have that working, forget testing anything else.


                  • helger @johnpoz

                    @johnpoz Sure, I can try single threads. It's just that with multiple threads I'm sure I'm getting the max the pipe can do, which isn't always the case with a single thread; but OK, single from now on.
                    So, to the issue at hand: any thoughts on what could be tested in terms of settings or things to check?

                    • helger

                      Never mind, I'll go hide in shame now... I didn't realize one NIC had "decided" to turn off autonegotiation and set itself to 100 Mbit half-duplex... Jeez.
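
                      (For anyone hitting the same thing, a quick way to spot a bad link state; a sketch assuming a pfSense/FreeBSD NIC named em0 and an ESXi uplink named vmnic0, so adjust the names for your hardware:

                      # on pfSense (FreeBSD): the media line shows the negotiated speed/duplex
                      ifconfig em0 | grep media
                      # healthy: media: Ethernet autoselect (1000baseT <full-duplex>)
                      # the bad state here would show: 100baseTX <half-duplex>

                      # on the ESXi host: lists speed and duplex for every physical NIC
                      esxcli network nic list
                      )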

                      • johnpoz LAYER 8 Global Moderator

                        So what is your speed now, testing locally? Single thread?

                        Here I just fired up an Ubuntu VM on my Win10 box for testing, using Hyper-V. It's an OLD box... a Dell XPS 8300... running a pfSense VM on it, with the WAN connected via vSwitch to my physical network, on a crap Realtek NIC ;)

                        So the path goes:

                        iperf3 client on Ubuntu VM - vSwitch - pfSense VM - vSwitch - real NIC --- switch --- NAS NIC --- iperf3 server running in Docker.

                        And I'm hitting pretty freaking close to full speed at 840-ish Mbit. Now, the physical box does 940 to the same iperf3 server... but I'm OK with that performance ;)
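
                        (To reproduce that single-stream run, a sketch assuming iperf3 is installed on both ends:

                        # on the NAS end (the server)
                        iperf3 -s

                        # on the Ubuntu VM behind pfSense, one stream
                        iperf3 -c <nas-ip> -P 1
                        )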


                        • helger @johnpoz

                          @johnpoz Getting the full gigabit speed now. I also tried between firewall zones inside the ESXi host; there pfSense pushes just shy of 3 Gbit.
                          I guess that's quite fair for a dual core.

                          With Suricata turned on I got just above 1 Gbit.

                          Physical workstation to a VM traversing zones (client<->server)
                          [ ID] Interval Transfer Bandwidth
                          [ 3] 0.0-10.0 sec 1.06 GBytes 913 Mbits/sec

                          I'm happy!

                          • tsis

                            @helger said in pfSense on ESXi 6.7 - slow throughput:

                            Getting the full gigabit speed now. I also tried between firewall zones inside the ESXi host; there pfSense pushes just shy of 3 Gbit.
                            I guess that's quite fair for a dual core.
                            With Suricata turned on I got just above 1 Gbit.
                            Physical workstation to a VM traversing zones (client<->server)
                            [ ID] Interval Transfer Bandwidth

                            Hello all,

                            I have the same problem with 6.7 U2...

                            Regards

                            • johnpoz LAYER 8 Global Moderator

                              There was NO problem...
                              "Never mind, I'll go hide in shame now... I didn't realize one NIC had "decided" to turn off autonegotiation and set itself to 100 Mbit half-duplex... Jeez."

                              If you are seeing 90-ish Mbps when you think you should see gigabit, it's more than likely the same issue!
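
                              If that's the case, re-enabling autonegotiation on the ESXi physical uplink is one command (a sketch, assuming the affected uplink is vmnic0):

                              esxcli network nic set -n vmnic0 -a

                              The same setting is also exposed in the vSphere UI under the host's physical NICs.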


                              • tsis @johnpoz

                                @johnpoz

                                Hello,

                                I didn't quite understand: did you change the autonegotiation setting on the WAN NIC of the ESXi host, or in pfSense? I will try it.

                                Thank you
