Netgate Discussion Forum

    Weird/Poor performance on ESXi using VMXNET3 adapters

    Virtualization
    6 Posts 3 Posters 1.8k Views
    • T
      thatsysadmin
      last edited by thatsysadmin

      I'm having a problem where pfSense on ESXi 7u2, with 4 vCPUs and VMXNET3 adapters, can't push more than about half a gigabit through. I can't get gigabit speeds, only half, and the VM isn't close to being maxed out.

      I tried disabling the kernel PTI mitigations, disabling various network card offloading options, raising the queue counts on the VMXNET3 adapters as described in the Netgate docs, and moving all the cores into a single vSocket. No dice.
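
      For reference, the offload and queue changes above look roughly like this from the pfSense shell (a sketch, not my exact config: the vmx0 interface name and the queue counts are examples, and the dev.vmx.*.iflib.* loader tunables assume the iflib-based vmx(4) driver used in recent pfSense/FreeBSD):

      ```shell
      # Disable hardware offloads on the VMXNET3 interface (interface name is an example)
      ifconfig vmx0 -txcsum -rxcsum -tso -lro

      # In /boot/loader.conf.local -- raise VMXNET3 queue counts (values are examples;
      # applies to the iflib-based vmx(4) driver and takes effect after a reboot)
      dev.vmx.0.iflib.override_ntxqs="4"
      dev.vmx.0.iflib.override_nrxqs="4"
      ```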

      I also couldn't force full duplex on the VMXNET3 adapters; I only see simplex (one-direction) speeds.

      I'm using Ookla's Speedtest for the tests, but to rule Speedtest out I also ran iperf3 through the pfSense instance, with some weird results: I can push full gigabit speeds one way, but in the other direction I can only establish a connection, with no data being transferred. Another test fluctuated between 10 and 2 Mb/s in both directions.
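
      For anyone who wants to reproduce the iperf3 runs, this is roughly the test matrix (the server address 192.0.2.10 is a placeholder for a host on the far side of pfSense):

      ```shell
      # On a host behind pfSense:
      iperf3 -s

      # From a host on the other side, routed through pfSense:
      iperf3 -c 192.0.2.10        # forward direction (client sends)
      iperf3 -c 192.0.2.10 -R     # reverse direction (server sends)
      iperf3 -c 192.0.2.10 -P 4   # 4 parallel streams, to spread load across queues
      ```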

      On the exact same hardware under Hyper-V (all I did was rebuild the setup with ESXi instead), I could push gigabit speeds no problem with the same configuration (4 vCPUs, Hyper-V synthetic NICs).

      Does anyone have any more ideas I could try to hit gigabit speeds on the ESXi setup? Thanks!

      EDIT: I don't think this is an issue with ESXi itself, because I could push gigabit speeds with iperf no problem when pfSense was the iperf server.

      • Cool_CoronaC
        Cool_Corona @thatsysadmin
        last edited by

        @thatsysadmin Misconfigured NICs in ESXi??

        • P
          posto587
          last edited by posto587

          Hi,
          we are also seeing performance issues with ESXi 7.2 and VMXNET3.

          Hardware is an AMD EPYC 7262 with an Intel X710 NIC, via vmxnet2.
          Also done a bit of tuning; disabled LRO and TSO in ESXi.

          May be an issue with FreeBSD, as OPNsense seems to have similar issues:
          https://forum.opnsense.org/index.php?topic=18754.105

          Regards

          • T
            thatsysadmin @Cool_Corona
            last edited by

            @cool_corona
            Everything seems configured OK. I double-checked everything.

            • T
              thatsysadmin @posto587
              last edited by

              @posto587
              Just wondering, are you able to pass through your network card (or use SR-IOV) to your VM, just to rule out the VMXNET3 drivers?

              • P
                posto587 @thatsysadmin
                last edited by

                @thatsysadmin

                we tested passthrough of the 10G NIC with almost the same results.

                Bare metal on the same machine works fine.

                I think this is an issue between ESXi and pfSense. We see a very high interrupt load on pfSense during bandwidth testing on ESXi, even with the NIC passed through.
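
                For anyone wanting to watch the interrupt load while a test runs, a couple of stock FreeBSD tools are enough (nothing pfSense-specific assumed):

                ```shell
                # Cumulative interrupt counts and rates per source since boot
                vmstat -i

                # Live per-CPU breakdown; watch the interrupt column while iperf3 runs
                top -P -S
                ```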

                Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.