Weird/Poor performance on ESXi using VMXNET3 adapters
I'm having a problem where pfSense on ESXi 7u2 can't push more than half a gigabit using VMXNET3 adapters with 4 vCPUs. I can't get gigabit speeds, only about half, and the VM isn't close to being maxed out.
I tried disabling kernel PTI mitigations, disabling various network card offloading options, raising the queues on the VMXNET3 adapters as described in the Netgate docs, and moving all the cores into a single vSocket. No dice. A rough sketch of that tuning is below.
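For reference, here's a minimal sketch of what I put in /boot/loader.conf.local. The queue overrides use the standard iflib tunable names; double-check them against the Netgate docs and adjust the vmx unit number and queue counts to your setup:

    # /boot/loader.conf.local
    # Disable kernel page-table isolation (Meltdown mitigation)
    vm.pmap.pti="0"
    # Raise TX/RX queue counts on the first VMXNET3 adapter (vmx0)
    dev.vmx.0.iflib.override_ntxqs="4"
    dev.vmx.0.iflib.override_nrxqs="4"

The offloads can also be toggled at runtime (or via System > Advanced > Networking in the pfSense GUI):

    # Disable checksum offload, TSO, and LRO on vmx0
    ifconfig vmx0 -txcsum -rxcsum -tso4 -tso6 -lro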
I also couldn't force full duplex on the VMXNET3 adapters; I'm only getting simplex speeds, with traffic moving well in one direction at a time.
I'm using Ookla's Speedtest for the tests, but to rule that out, I also ran iperf3 through the pfSense instance, with some weird results: I can push full gigabit speeds one way, but in the other direction I can only establish a connection, with no data being transferred. Another test yielded a fluctuating 2-10 Mbit/s both ways. The runs were roughly as follows.
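The iperf3 commands were along these lines (the address is a placeholder; the server sat on one side of pfSense, the client on the other):

    # On a host behind pfSense:
    iperf3 -s

    # From a host on the other side, routed through pfSense:
    iperf3 -c 192.0.2.10 -t 30       # one direction: full gigabit
    iperf3 -c 192.0.2.10 -t 30 -R    # reverse direction: connects, but no data flows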
On the exact same hardware under Hyper-V (all I did afterwards was rebuild the setup with ESXi), I could push gigabit speeds no problem with the same configuration (4 vCPUs, Hyper-V synthetic NICs).
Does anyone have any more ideas I could try to hit gigabit speeds on the ESXi setup? Thanks!
EDIT: I don't think this is an issue with ESXi itself, because I could push gigabit speeds with iperf3 no problem with pfSense as the iperf3 server.
@thatsysadmin Misconfigured NICs in ESXi??
We are also seeing performance issues with ESXi 7.2 and VMXNET3.
Hardware is an AMD EPYC 7262 with an Intel X710 NIC, presented to the VM via VMXNET3.
We've also done a bit of tuning: disabled LRO and TSO in ESXi, roughly as shown below.
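In case it helps anyone compare notes, the host-side changes looked roughly like this (option names can be verified with esxcli system settings advanced list; the VM may need a restart for them to take effect):

    # Disable VMXNET3 software/hardware LRO and VMkernel LRO
    esxcli system settings advanced set -o /Net/Vmxnet3SwLRO -i 0
    esxcli system settings advanced set -o /Net/Vmxnet3HwLRO -i 0
    esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 0
    # Disable hardware TSO in the VMkernel
    esxcli system settings advanced set -o /Net/UseHwTSO -i 0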
This may be an issue with FreeBSD, as OPNsense seems to have similar issues:
Everything seems configured OK; I've double-checked it all.
Just wondering, are you able to pass through your network card (or use SR-IOV) to your VM, just to rule out the VMXNET3 driver?
We tested passthrough of the 10G NIC with almost the same results.
Bare metal on the same machine works fine.
I think this is an issue with ESXi and pfSense. We see very high interrupt rates on pfSense during bandwidth testing on ESXi, even when we pass the NIC through; the commands below show how we observed it.
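For anyone who wants to check the same thing, we just watched the interrupt load with the stock FreeBSD tools while the bandwidth test was running:

    # Per-device interrupt totals and rates
    vmstat -i

    # Per-CPU breakdown; watch the interrupt column while iperf3 runs
    top -HPS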