I solved the same problem by turning on the "Disable hardware TCP segmentation offload" and "Disable hardware large receive offload" options in System->Advanced->Networking.
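If anyone wants to try the same change from the shell before committing the GUI checkboxes, here is a rough sketch; the em0/em1 interface names are placeholders, and the GUI options are still what makes the change persistent:

```python
# Sketch only: clear the TSO and LRO hardware offload flags per interface on
# FreeBSD/pfSense, then print the resulting options line to confirm.
# Assumes shell access; em0/em1 are placeholder NIC names.
import subprocess

INTERFACES = ["em0", "em1"]  # adjust to your hardware

for iface in INTERFACES:
    # ifconfig <iface> -tso -lro turns off TCP segmentation offload and LRO
    subprocess.run(["ifconfig", iface, "-tso", "-lro"], check=True)
    out = subprocess.run(["ifconfig", iface], capture_output=True, text=True, check=True)
    for line in out.stdout.splitlines():
        if "options=" in line:
            print(f"{iface}: {line.strip()}")
```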
Update: found the culprit. Something around the traffic shaper is messing it up: when I remove the traffic shaper rules, speeds go up to 315/110, but when I introduce a traffic shaper speed limit (even with the upload set at 100), the upload speed drops to 60.
Very odd, considering my traffic shaper settings are identical on both routers. I'll play around with the traffic shaper.
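If it helps with debugging, you can watch the shaper queues while a speed test runs and see whether one of them is dropping packets. A rough diagnostic sketch, assuming ALTQ-style shaper queues (limiters use dummynet, so you would look at `dnctl pipe show` instead):

```python
# Diagnostic sketch, not a fix: sample ALTQ queue statistics a few times
# while a speed test is running and print the queue names and drop counters.
import subprocess
import time

for _ in range(5):
    out = subprocess.run(["pfctl", "-vsq"], capture_output=True, text=True, check=True)
    for line in out.stdout.splitlines():
        if line.lstrip().startswith("queue") or "dropped" in line:
            print(line.rstrip())
    print("-" * 40)
    time.sleep(2)
```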
@stephenw10 In my previous Proxmox setup I had used passthrough to pass the full PCI device to the VM for pfSense. I reconfigured yesterday and created bridges instead, as per https://docs.netgate.com/pfsense/en/latest/recipes/virtualize-proxmox-ve.html, and upgraded to 2.6 without issue. Much better way of doing it. Now everything is updated and running as quickly as if it were on hardware.
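For anyone scripting the same change, the bridge NICs can also be attached from the Proxmox shell. A minimal sketch, not taken from the Netgate recipe; the VM ID and bridge names are assumptions:

```python
# Sketch: attach virtio NICs on Linux bridges to a pfSense VM via the
# Proxmox CLI. VM ID 100 and vmbr0/vmbr1 are placeholders.
import subprocess

VMID = "100"  # hypothetical VM ID
NICS = {
    "net0": "virtio,bridge=vmbr0",  # WAN bridge (assumed)
    "net1": "virtio,bridge=vmbr1",  # LAN bridge (assumed)
}

for slot, spec in NICS.items():
    subprocess.run(["qm", "set", VMID, f"--{slot}", spec], check=True)
```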
I'll go ahead and answer my own question here: the issue was apparently driver related. On my third time installing pfSense, I manually uninstalled and reinstalled an updated version of the Intel PRO/1000 PT driver. After that, it worked without any issues, exactly how I wanted it to. I'm not sure why that would be the case, as even the updated driver was only slightly newer, but either way it works and is invisible to the host OS, which is all I need. I'm getting full gigabit speeds both ways with only 2 ms more latency than I had before. I'm running Snort.
@raqib But it does not help in every case; I had problems on Server 2022, and it only helped some of the time.
What "helped" me all of the time was using two separate, external vSwitches, one only for pfSense and one for all the other VMs. And that meant there had to be a physical switch in place to connect those two vSwitches. Thankfully it is working in the latest Plus-version.
I'm having this same problem in ESXi 6.5 with standard vSwitches: the same duplex issues on two different VMware clusters. I'm seeing it in the Cisco logs because LLDP/CDP is turned on. CARP seems to work just fine for me after enabling promiscuous mode in the vSwitch. pfSense 2.6.0, Intel 82599 NIC. I can see this in the logs on our Cisco 6509 and Nexus 5K switches, depending on which hypervisor is running the VM. The 6509s connect to the hypervisors with a standard LACP port channel; the Nexus switches use a vPC LACP bond. I do not have any other gear throwing these errors. I can see this issue on both standalone and clustered pfSense VMs.
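In case it helps, the promiscuous-mode change can be applied from the ESXi shell as well as the UI. A rough sketch; vSwitch0 is a placeholder name, and forged transmits / MAC changes are commonly enabled alongside promiscuous mode for CARP, though the post above only needed promiscuous mode:

```python
# Sketch: loosen the standard vSwitch security policy from the ESXi shell.
# You can equally just run the esxcli command directly without Python.
import subprocess

VSWITCH = "vSwitch0"  # placeholder vSwitch name

subprocess.run([
    "esxcli", "network", "vswitch", "standard", "policy", "security", "set",
    f"--vswitch-name={VSWITCH}",
    "--allow-promiscuous=true",
    "--allow-forged-transmits=true",  # often enabled for CARP as well
    "--allow-mac-change=true",
], check=True)
```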
Not to a VM, not yet. However, I tested the upgrade process over the weekend and it completed without problems, so I would expect a clean install of 2.6, then an upgrade to 22.01 and then 22.05, to work fine.
Adding new hardware devices to the VM, such as NICs, will likely change the NDI and hence the subscription. Cores or RAM should not.
I suggest adding as many NICs as you might need before you start. You can always simply leave them disabled in pfSense or even disconnected in Proxmox.
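On Proxmox, one way to pre-create those spare NICs and leave them disconnected is something like the sketch below; the VM ID, bridge name, and number of NICs are all placeholders:

```python
# Sketch: add spare virtio NICs to a pfSense VM but leave their links down
# (link_down=1), so they exist before first boot without being in use.
import subprocess

VMID = "100"  # hypothetical VM ID

for slot in range(2, 6):  # net2..net5 as spares; adjust the count as needed
    spec = "virtio,bridge=vmbr1,link_down=1"  # vmbr1 is a placeholder bridge
    subprocess.run(["qm", "set", VMID, f"--net{slot}", spec], check=True)
```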