Netgate Discussion Forum

pfSense 2.1 + ESXi poor network performance

2.1 Snapshot Feedback and Problems - RETIRED
    jpmenil
    last edited by Aug 8, 2013, 1:29 PM

    Hi,

    I observe poor network performance with pfSense when used with ESXi.

    All my tests were made with iperf with firewalling disabled.
    I've run different tests with the e1000 and vmxnet3 drivers, on pfSense (2.0.3 and 2.1) in 32- and 64-bit, and on FreeBSD and Debian distributions.

    Here are the results of my tests:

    pfsense e1000

    [2.1-RC0][root@pfsense2.localdomain]/root(1): /usr/local/bin/iperf -c 172.17.8.32
    ------------------------------------------------------------
    Client connecting to 172.17.8.32, TCP port 5001
    TCP window size: 65.0 KByte (default)

    [  3] local 172.17.8.33 port 28383 connected with 172.17.8.32 port 5001
    [ ID] Interval      Transfer    Bandwidth
    [  3]  0.0-10.2 sec  276 MBytes  228 Mbits/sec

    pfsense vmxnet2

    [2.1-RC0][root@pfsense2.localdomain]/root(2): /usr/local/bin/iperf -c 172.17.8.28
    ------------------------------------------------------------
    Client connecting to 172.17.8.28, TCP port 5001
    TCP window size: 65.0 KByte (default)

    [  3] local 172.17.8.29 port 37385 connected with 172.17.8.28 port 5001
    [ ID] Interval      Transfer    Bandwidth
    [  3]  0.0-79.6 sec  4.02 GBytes  434 Mbits/sec

    pfsense vmxnet3

    [2.1-RC0][root@pfsense2.localdomain]/root(1): /usr/local/bin/iperf -c 172.17.8.28
    ------------------------------------------------------------
    Client connecting to 172.17.8.28, TCP port 5001
    TCP window size: 65.0 KByte (default)

    [  3] local 172.17.8.29 port 59092 connected with 172.17.8.28 port 5001
    [ ID] Interval      Transfer    Bandwidth
    [  3]  0.0-10.0 sec  3.15 GBytes  2.71 Gbits/sec

    debian e1000

    iperf -c 172.17.8.36
    ------------------------------------------------------------
    Client connecting to 172.17.8.36, TCP port 5001
    TCP window size: 23.5 KByte (default)

    [  3] local 172.17.8.37 port 40286 connected with 172.17.8.36 port 5001
    [ ID] Interval      Transfer    Bandwidth
    [  3]  0.0-10.0 sec  2.57 GBytes  2.21 Gbits/sec

    debian vmxnet3

    iperf -c 172.17.8.36
    ------------------------------------------------------------
    Client connecting to 172.17.8.36, TCP port 5001
    TCP window size: 23.5 KByte (default)

    [  3] local 172.17.8.37 port 41585 connected with 172.17.8.36 port 5001
    [ ID] Interval      Transfer    Bandwidth
    [  3]  0.0-10.0 sec  9.84 GBytes  8.45 Gbits/sec

    freebsd 8.3 e1000

    freebsd2# /usr/local/bin/iperf -c 172.17.8.38
    ------------------------------------------------------------
    Client connecting to 172.17.8.38, TCP port 5001
    TCP window size: 32.5 KByte (default)

    [  3] local 172.17.8.39 port 30375 connected with 172.17.8.38 port 5001
    [ ID] Interval      Transfer    Bandwidth
    [  3]  0.0-10.0 sec  2.26 GBytes  1.94 Gbits/sec

    freebsd 8.3 vmxnet3

    freebsd2# /usr/local/bin/iperf -c 172.17.8.38
    ------------------------------------------------------------
    Client connecting to 172.17.8.38, TCP port 5001
    TCP window size: 32.5 KByte (default)

    [  3] local 172.17.8.39 port 31077 connected with 172.17.8.38 port 5001

    freebsd 9.1 e1000

    root@freebsd2:/root # /usr/local/bin/iperf -c 172.17.8.38
    ------------------------------------------------------------
    Client connecting to 172.17.8.38, TCP port 5001
    TCP window size: 32.5 KByte (default)

    [  3] local 172.17.8.39 port 25752 connected with 172.17.8.38 port 5001
    [ ID] Interval      Transfer    Bandwidth
    [  3]  0.0-10.0 sec  2.94 GBytes  2.53 Gbits/sec

    freebsd 9.1 vmxnet3

    root@freebsd2:/root # /usr/local/bin/iperf -c 172.17.8.38
    ------------------------------------------------------------
    Client connecting to 172.17.8.38, TCP port 5001
    TCP window size: 32.5 KByte (default)

    [  3] local 172.17.8.39 port 54521 connected with 172.17.8.38 port 5001
    [ ID] Interval      Transfer    Bandwidth
    [  3]  0.0-10.0 sec  6.72 GBytes  5.77 Gbits/sec

    I can't understand why pfSense does not get the same results as FreeBSD.
    Is anyone aware of the culprit?

    By the way, I observe better performance for pfSense under KVM with virtio than under Xen.

    Best regards.

      stephenw10 Netgate Administrator
      last edited by Aug 8, 2013, 1:39 PM

      Interesting question, and useful numbers, which is always good. You seem to have missed some of the iperf results above. Do you have those numbers?

      Steve

        jpmenil
        last edited by Aug 8, 2013, 2:43 PM

        Indeed, my fault, bad copy/paste.
        I need to rebuild the VM …

        So the result for freebsd8.3

        e1000

        freebsd2# /usr/local/bin/iperf -c 172.17.8.226

        Client connecting to 172.17.8.226, TCP port 5001
        TCP window size: 32.5 KByte (default)

        [  3] local 172.17.8.227 port 21848 connected with 172.17.8.226 port 5001
        [ ID] Interval      Transfer    Bandwidth
        [  3]  0.0-10.0 sec  2.37 GBytes  2.04 Gbits/sec

        and the result with the vmxnet3 driver:

        freebsd2# /usr/local/bin/iperf -c 172.17.8.226
        ------------------------------------------------------------
        Client connecting to 172.17.8.226, TCP port 5001
        TCP window size: 32.5 KByte (default)

        [  3] local 172.17.8.227 port 56445 connected with 172.17.8.226 port 5001
        [ ID] Interval      Transfer    Bandwidth
        [  3]  0.0-10.0 sec  3.79 GBytes  3.26 Gbits/sec

          stephenw10 Netgate Administrator
          last edited by Aug 8, 2013, 2:41 PM

          Ah, well the figure that stands out is FreeBSD 8.3 with vmxnet3. I assume it was far higher than pfSense, >5Gbps?

          Steve

            jpmenil
            last edited by Aug 8, 2013, 3:00 PM

            Yes,

            but why is performance with pfSense lower than FreeBSD on the same BSD version?

              dhatz
              last edited by Aug 8, 2013, 4:22 PM

              Had you enabled "pf" (even with a very simple ruleset) when testing stock FreeBSD 8.3 against pfSense ?

                jpmenil
                last edited by Aug 9, 2013, 6:38 AM

                Nope,

                because I was only testing performance without firewalling. I know I will lose a little with firewalling enabled.
                I'm very surprised that network performance under FreeBSD is lower than under Debian.
                Maybe the Ethernet drivers are not as good under FreeBSD.
                But I did not expect such awful performance with pfSense.

                Is this a known issue?

                Regards.

                  stephenw10 Netgate Administrator
                  last edited by Aug 9, 2013, 9:07 AM

                  How did you disable pf?

                  Steve

                    jpmenil
                    last edited by Aug 9, 2013, 9:16 AM

                    There is a checkbox under System/Advanced/Firewall-NAT called "Disable all packet filtering."
                    And when I run pfctl -d from the console, I get "pfctl: pf not enabled".
                    So I think it's disabled.
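                    A quick way to confirm this from the console (a sketch using standard pfctl invocations; the exact output wording can vary by FreeBSD version):

```shell
pfctl -s info | head -n 1   # first line shows "Status: Disabled ..." when packet filtering is off
pfctl -d                    # prints "pfctl: pf not enabled" if pf is already disabled
```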

                      stephenw10 Netgate Administrator
                      last edited by Aug 9, 2013, 9:23 AM

                      Fair enough, that seems disabled.
                      Still routing? NATing stuff?

                      Steve

                        eri--
                        last edited by Aug 9, 2013, 9:30 AM

                        You have to check TSO and LRO as well; IIRC those get disabled by default on pfSense.
                        It is also worth checking the difference in the net.isr sysctls between FreeBSD and pfSense.

                        While iperf measures performance for a single stream, it does not generalize to different workload requirements.

                        Also, you are not testing forwarding performance when running iperf on pfSense itself, and you have to consider that.
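                        A sketch of those checks from the shell (the interface name em0 is an assumption; substitute your own):

```shell
sysctl net.inet.tcp.tso          # 1 = TSO permitted at the TCP layer
ifconfig em0 | grep -i options   # look for TSO4/LRO among the enabled interface capabilities
sysctl net.isr                   # compare dispatch policy and thread counts on both systems
```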

                          jpmenil
                          last edited by Aug 9, 2013, 9:34 AM

                          Yes, I'm already aware of this.
                          For now it is just a test to evaluate the virtualization.
                          I expected to get the same results with FreeBSD and pfSense.

                            labasus
                            last edited by Aug 16, 2013, 6:06 PM

                            You've missed the ESXi version and the hardware you are using.
                            Is it a standalone server or a vSphere cluster, and do you have all ESX host drivers up to date?

                              eri--
                              last edited by Aug 16, 2013, 6:19 PM

                              One thing that seems to help your workload is polling.
                              So enable polling and test with it.

                              Also, in pfSense kern.hz is reduced to 100 when VMware is detected; it might be worth upping it to the same value as FreeBSD.
                              That used to be problematic at the time if you ran with vmware-tools, though, so it is probably worth testing that scenario too.
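                              A sketch of both suggestions, assuming em(4) NICs and a kernel built with options DEVICE_POLLING:

```shell
ifconfig em0 polling                       # enable device polling on the interface
sysctl kern.hz                             # check the current timer frequency
echo 'kern.hz=1000' >> /boot/loader.conf   # kern.hz is a boot-time tunable; 1000 is FreeBSD's stock default
```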

                                foonus
                                last edited by Aug 18, 2013, 4:14 PM

                                Not sure if this is related, but earlier this week, using the daily snapshots, my download speed went from 250 Mbit to 12 when running through the pfSense box; bypassing the box and hooking directly to the cable modem gives 250 again. Upload speed does not seem to be affected (15 Mbit with or without pfSense). I only started seeing this earlier this week. I disabled IPv6 but the problem persists to this day.

                                  RootWyrm
                                  last edited by Aug 18, 2013, 8:00 PM

                                  Okay, couple things.

                                  1. Which ESXi -exactly-? Version and build number.
                                  2. Are you running the vmxnet2 as Flexible?
                                  3. Can you please retest with the 'legacy' interface? Preferably in pcn(4) mode over lnc(4) mode. I'm rusty so I forget how to force that behavior. (Hell, I forget if the PCIID changes were committed.)
                                  4. How many em(4) (aka e1000) interfaces are you running during testing? Yes, this matters.

                                  I think part of the problem is that 2.1 pulled in a bad em(4) branch - but I haven't had time to test more in depth.

                                  EDIT: Oh, can you also please check to see if you have "calcru: runtime went backwards" messages?
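                                  A quick way to check for those messages (standard tools; the log path is the FreeBSD default):

```shell
dmesg | grep -i calcru             # look for "calcru: runtime went backwards"
grep -i calcru /var/log/messages   # also check the persisted system log
```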
