[solved] pfSense (2.6.0 & 22.01) is very slow on Hyper-V
-
@rmh-0 Thanks, that also solved it in my environment. Happy to be back to normal performance.
-
Because none of this helped in my case, I tried DDA (Discrete Device Assignment) to pass a quad-port Intel 1G NIC through to pfSense. This worked, so I was able to use no virtual NICs at all in pfSense. My test Windows VM is still using a virtual adapter, but now has to go through a physical switch every time... With that, everything is working like it should. Now I hope I can DDA my 10G NIC into pfSense too.
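For anyone who wants to try the same, here is a minimal sketch of the DDA steps, assuming a host that supports DDA; the friendly-name filter and the VM name are placeholders, and a multi-port NIC shows up as one PCIe function per port:
# DDA requires the VM's automatic stop action to be TurnOff, and the VM must be powered off.
Set-VM -VMName "pfSense" -AutomaticStopAction TurnOff
# Locate one NIC port and get its PCIe location path ("I350" is a placeholder match).
$dev = Get-PnpDevice -PresentOnly -Class Net | Where-Object { $_.FriendlyName -like "*I350*" } | Select-Object -First 1
$locationPath = (Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths -InstanceId $dev.InstanceId).Data[0]
# Disable it on the host, dismount it from the host partition, and assign it to the VM.
Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath
Add-VMAssignableDevice -LocationPath $locationPath -VMName "pfSense"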
-
Just for some additional info:
We run pfSense in HA (CARP) on two Hyper-V Gen2 VMs, both on Hyper-V 2022 and HPE DL360 Gen10 hardware, each with two HPE FlexFabric 10Gb 2-port 556FLR-SFP NICs teamed in a vSwitch SET with dynamic load balancing on the NICs (Switch Embedded Teaming, no LACP). We always disable RSC during the install scripts, as it gave us nothing but trouble. VMQ is not enabled for the pfSense VMs either, as in the past it didn't work with VMQ anyway. For the love of God I cannot replicate this issue of slow inter-VLAN traffic.
However, a colleague's company we work with has this issue on their internal network. That is a Gen1 VM on a DL360 Gen9, with a SET team on the four regular onboard HPE 331i NICs, but with Hyper-V port balancing. They have two Hyper-V nodes (clustered) and their pfSense 2.6.0 VM is on one of those nodes. They have a hardware NAS accessible through SMB. Accessing that NAS (in a different VLAN) is slow as molasses, BUT ONLY when the VM accessing the NAS is on the same host as the pfSense VM. When we move that VM, or pfSense, to the other node, poof, everything is up to line speed again. We move the two VMs back together and it's turdspeed again.
The same goes for physical machines in another VLAN: when the VM with the SMB share runs on the same Hyper-V host as pfSense 2.6, access to that share is super slow; move that VM to another host and it all works fine. So for this specific environment, inter-VLAN routing performance only seems bad when those VMs reside on the same host as pfSense.
Needless to say it all works fine with 2.5.x.
The one thing I see in common with at least one other person in this thread is that the problematic pfSense runs on HPE Gen9 hardware. Are there more people running on Gen9 with issues? If you use a SET team, what is the load balancing mode? (A quick way to check it is sketched below.)
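For comparison, the SET mode can be read and changed from PowerShell; "SET-Team" below is a placeholder switch name:
# Show the members and load-balancing algorithm of a SET vSwitch.
Get-VMSwitchTeam -Name "SET-Team" | Format-List NetAdapterInterfaceDescription, TeamingMode, LoadBalancingAlgorithm
# Toggle between the two modes mentioned above when testing.
Set-VMSwitchTeam -Name "SET-Team" -LoadBalancingAlgorithm HyperVPort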
-
@cannyit you fixed your slow WAN speed but not the inter-VLAN routing?!
I think it needs two separate threads, because we are talking about two different problems -
@rgijsen No, I was using a lab system (Ryzen-based Windows 11 Pro with Hyper-V enabled and a 10Gig ASUS NIC)
-
@m0nji WAN speed, or rather speed between all interfaces, is fine again after disabling RSC. I didn't have an issue with inter-VLAN routing. I have VLANs tagged within pfSense but speed is OK.
-
Is the VLAN issue only happening on Gen 1 VMs?
That was an early theory but I've not seen a definite conclusion. -
@stephenw10 said in After Upgrade inter (V)LAN communication is very slow (on Hyper-V).:
Is the VLAN issue only happening on gen 1 VMs?
That was an early theory but I've not seen a definite conclusion.
It is not VLAN related but related to the hn interfaces and/or virtual switches, I would say.
Just for the record:
- First one who encountered it
- Server 2022 Gen2 VM
- AMD system
- WAN speed was never affected
- It happened even when using just virtual adapters on a private switch without VLANs
- Most probably a bug in FreeBSD 12.3 regarding Hyper-V
-
Mmm, I'd bet it's going to be one of these changes from Mar 2021:
https://github.com/pfsense/FreeBSD-src/commits/RELENG_2_6_0/sys/dev/hyperv/netvsc
And given that one of those enabled RSC support....
-
+1 for Disabling RSC. My setup...
- Home "power user"
- Fitlet2 with three network interfaces
- Windows Server 2019
- Hyper-V Gen 1 VM.
** Three virtual switches (one for each ISP, one for the switch)
** 7 network interfaces at VM level (one for each ISP, 5 for 5 VLANs, set with VLAN tagging at the virtual NIC level in Hyper-V; a sketch of that follows this list)
** All virtual NICs with every check box disabled except VLAN (no VMQ, etc.)
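A sketch of that per-NIC tagging, with placeholder VM and adapter names:
# Tag one virtual NIC as an access port on VLAN 10.
Set-VMNetworkAdapterVlan -VMName "pfSense" -VMNetworkAdapterName "VLAN10" -Access -VlanId 10
# Verify the tagging of all the VM's adapters.
Get-VMNetworkAdapterVlan -VMName "pfSense"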
WAN speed was terrible. Disabled software receive-side coalescing (RSC) on the VM switches, and all good.
Commands used:
Check:
Get-VMSwitch | % { $_ | Select-Object *RSC* }
Change:
Get-VMSwitch | % { $_ | Set-VMSwitch -EnableSoftwareRsc $false }
Re-check:
Get-VMSwitch | % { $_ | Select-Object *RSC* }
-
Yup, that is the main difference: it is not about WAN speed in my case and a few others'. If WAN, then RSC.
I disabled RSC a second time today and even rebooted the host; again it didn't help in my case.
-
Looking at that commit there are some additions we should be able to check. Some sysctls:
dev.hn.0.rx.0.rsc_drop: 0
dev.hn.0.rx.0.rsc_pkts: 0
And if you boot verbose the logs show:
Feb 24 18:51:42 kernel hn0: hwcaps rsc: ip4 1 ip6 1
Feb 24 18:51:42 kernel hn0: offload rsc: ip4 2, ip6 2
So I wonder if this is somehow not being disabled because it still seems to fit:
It only affects TCP.
It only affects traffic between VMs in the same host.
Steve
-
@stephenw10 said in After Upgrade inter (V)LAN communication is very slow (on Hyper-V).:
It only affects TCP.
Confirmed for me. UDP streams were fine. RSC applies only to TCP packets (by definition).
It only affects traffic between VMs in the same host.
In my case, because my traffic comes in on other physical NICs, and those physical NICs tie to a physical NIC with virtual NICs (VLAN-tagged by Hyper-V) where the clients live on those VLANs, it definitely affects traffic outside of the host.
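For anyone trying to reproduce the TCP-only symptom, a minimal comparison from any client VM routed through pfSense, assuming iperf3 is installed and using a placeholder server address in another VLAN:
# TCP stream: the case that collapses when the problem is present.
iperf3 -c 192.168.10.5 -t 30
# UDP stream at a fixed rate: reported to stay at full speed above.
iperf3 -c 192.168.10.5 -u -b 1G -t 30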
-
Be good to get those values from an affected VM. I grabbed those from Azure, which isn't affected.
Steve
-
@stephenw10 said in After Upgrade inter (V)LAN communication is very slow (on Hyper-V).:
Be good to get those values from an affected VM
I switched RSC back on, booted the VM back up, and verified the problem existed. Here are the ctls (attached): sysctls.txt
I booted verbose, but missed the output. I couldn't find it in the logs. Am I going to have to do a serial console to see them?
-
Nice, so you see traffic in those counters with RSC enabled in the vswitches when the problem exists. Do you still see data there when RSC is disabled in the switches?
It would be interesting to see if that varies for VMs that are still hitting issues even with RSC disabled.
Steve
-
I still think there's more than one issue. My post several days ago is with pfSense on Hyper-V, a Gen 2 VM, with no VLANs defined in pfSense, the vNIC, or the vSwitch, and RSC disabled. And I had (well, have, if I turn the pfSense 2.6 VM back on) the slow performance issue.
Perhaps for Hyper-V configurations with native interfaces throughout, RSC is not a factor; at best it's a clue.
For those using Hyper-V with VLANs where disabling RSC is the fix, that's great, and also a clue.
My mind keeps going back to the troubleshooting where migrating the pfSense VM to another host fixed the slow network performance. I'm beginning to think the issue is how FreeBSD interfaces with the hn driver. In my physical setup, technically my LAN is a directly connected native vSwitch on the Hyper-V host. Any device connected to hn1 (in my case) is connected to the same physical Hyper-V host. pfSense is routing data between different vNICs, which are connected to different vSwitches, which are bound to unique physical uplinks.
Unfortunately I haven't had time to prove this or packet-capture it. This seems OS/driver related to me.
-
VMQ disabled on all VMs
RSC disabled on Hyper-V host:
PS C:\WINDOWS\system32> Get-VMSwitch -Name "Bridged_LAN" | Select-Object *RSC*

SoftwareRscEnabled RscOffloadEnabled
------------------ -----------------
False              False
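For reference, VMQ can be disabled per VM from the host roughly like this; "pfSense" is a placeholder VM name:
# A VmqWeight of 0 disables VMQ for all of the VM's network adapters.
Set-VMNetworkAdapter -VMName "pfSense" -VmqWeight 0
# Confirm the setting.
Get-VMNetworkAdapter -VMName "pfSense" | Select-Object Name, VmqWeight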
Freshly booted FreeBSD 12.3, after a 120-second iperf test (with speed problems):
dev.hn.0.rx.0.rsc_drop: 0
dev.hn.1.rx.0.rsc_drop: 0
dev.hn.2.rx.0.rsc_drop: 0
dev.hn.2.rx.0.rsc_pkts: 0
dev.hn.1.rx.0.rsc_pkts: 0
dev.hn.0.rx.0.rsc_pkts: 321
Freshly booted FreeBSD 13.0, after a 120-second iperf test (speed problem does not exist):
The sysctls do not exist!
root@freebsd130:~ # sysctl dev.hn.0.rx.0.rsc_drop
sysctl: unknown oid 'dev.hn.0.rx.0.rsc_drop'
root@freebsd130:~ # sysctl dev.hn.1.rx.0.rsc_drop
sysctl: unknown oid 'dev.hn.1.rx.0.rsc_drop'
root@freebsd130:~ # sysctl dev.hn.2.rx.0.rsc_drop
sysctl: unknown oid 'dev.hn.2.rx.0.rsc_drop'
root@freebsd130:~ # sysctl dev.hn.0.rx.0.rsc_pkts
sysctl: unknown oid 'dev.hn.0.rx.0.rsc_pkts'
root@freebsd130:~ # sysctl dev.hn.1.rx.0.rsc_pkts
sysctl: unknown oid 'dev.hn.1.rx.0.rsc_pkts'
root@freebsd130:~ # sysctl dev.hn.2.rx.0.rsc_pkts
sysctl: unknown oid 'dev.hn.2.rx.0.rsc_pkts'
-
@stephenw10 said in After Upgrade inter (V)LAN communication is very slow (on Hyper-V).:
It would be interesting to see if that varies for VMs that are still hitting issues even with RSC disabled.
Shutdown, disable RSC on virtual switches, boot, test (success), check ctls... results attached (all 0's - good).
Definitely two different issues.