[solved] pfSense (2.6.0 & 22.01) is very slow on Hyper-V
-
Hmm, so just to be clear you are seeing the RSC packets counter increment whether or not you have disabled RSC on the vswitch that interface is connected to?
That seems like a different result from the one seen by those for whom disabling RSC solved the issue. And it seems to support my conjecture...
Unclear what we can do about it though if that is the case. Yet.
Steve
-
@stephenw10 said in After Upgrade inter (V)LAN communication is very slow (on Hyper-V).:
Hmm, so just to be clear you are seeing the RSC packets counter increment whether or not you have disabled RSC on the vswitch that interface is connected to?
True
Unclear what we can do about it though if that is the case. Yet.
I hope you guys figure it out.
-
Ok, so we need more data points here to be sure it's actually what's happening.
But assuming that's true, it appears:
- There's an issue with the RSC code added in FreeBSD.
- In some situations the vSwitches in Hyper-V do not respect the disable-RSC setting.
Steve
-
Ok, so it looks like our European friends did in fact already hit this, since they are building on 13-stable, and they came to the same conclusions. I have opened a bug report: https://redmine.pfsense.org/issues/12873
Steve
-
What do you guys see for these sysctls?
dev.hn.0.hwassist: 607<CSUM_IP,CSUM_IP_UDP,CSUM_IP_TCP,CSUM_IP6_UDP,CSUM_IP6_TCP>
dev.hn.0.caps: 7ff<VLAN,MTU,IPCS,TCP4CS,TCP6CS,UDP4CS,UDP6CS,TSO4,TSO6,HASHVAL,UDPHASH>
dev.hn.0.ndis_version: 6.30
dev.hn.0.nvs_version: 393217
Please report whether or not you're hitting the issue with the values shown.
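For example, from a pfSense shell (SSH or Diagnostics > Command Prompt), a single sysctl call should print all four values; repeat for hn1/hn2 if you have more interfaces:
sysctl dev.hn.0.hwassist dev.hn.0.caps dev.hn.0.ndis_version dev.hn.0.nvs_version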
-
@stephenw10 Hitting it hard.
sysctl dev.hn.0.hwassist
sysctl dev.hn.0.caps
sysctl dev.hn.0.ndis_version
sysctl dev.hn.0.nvs_version
2.5.2:
dev.hn.0.hwassist: 1617<CSUM_IP,CSUM_IP_UDP,CSUM_IP_TCP,CSUM_IP_TSO,CSUM_IP6_UDP,CSUM_IP6_TCP,CSUM_IP6_TSO>
dev.hn.0.caps: 7ff<VLAN,MTU,IPCS,TCP4CS,TCP6CS,UDP4CS,UDP6CS,TSO4,TSO6,HASHVAL,UDPHASH>
dev.hn.0.ndis_version: 6.30
dev.hn.0.nvs_version: 327680
2.6.0:
dev.hn.0.hwassist: 607<CSUM_IP,CSUM_IP_UDP,CSUM_IP_TCP,CSUM_IP6_UDP,CSUM_IP6_TCP>
dev.hn.0.caps: 7ff<VLAN,MTU,IPCS,TCP4CS,TCP6CS,UDP4CS,UDP6CS,TSO4,TSO6,HASHVAL,UDPHASH>
dev.hn.0.ndis_version: 6.30
dev.hn.0.nvs_version: 393217
Although this time I haven't deactivated RSC on the Windows host.
-
RSC disabled on Hyper-V Host
PS C:\Users\m0nji\Downloads\iperf-3.1.3-win64\iperf-3.1.3-win64> .\iperf3.exe -c 192.168.187.11 -R
Connecting to host 192.168.187.11, port 5201
Reverse mode, remote host 192.168.187.11 is sending
[ 4] local 192.168.189.10 port 49995 connected to 192.168.187.11 port 5201
[ ID] Interval           Transfer     Bandwidth
[ 4]   0.00-1.00   sec  55.6 KBytes   454 Kbits/sec
[ 4]   1.00-2.01   sec  21.4 KBytes   174 Kbits/sec
[ 4]   2.01-3.00   sec  21.4 KBytes   176 Kbits/sec
[ 4]   3.00-4.00   sec  21.4 KBytes   175 Kbits/sec
[ 4]   4.00-5.00   sec  17.1 KBytes   140 Kbits/sec
[ 4]   5.00-6.00   sec  21.4 KBytes   175 Kbits/sec
[ 4]   6.00-7.00   sec  21.4 KBytes   175 Kbits/sec
[ 4]   7.00-8.01   sec  15.7 KBytes   128 Kbits/sec
[ 4]   8.01-9.00   sec  20.0 KBytes   165 Kbits/sec
[ 4]   9.00-10.00  sec  21.4 KBytes   175 Kbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[ 4]   0.00-10.00  sec   384 KBytes   314 Kbits/sec   sender
[ 4]   0.00-10.00  sec   237 KBytes   194 Kbits/sec   receiver
iperf Done.
pfSense 2.6.0 (hitting the issue)
dev.hn.0.hwassist: 607<CSUM_IP,CSUM_IP_UDP,CSUM_IP_TCP,CSUM_IP6_UDP,CSUM_IP6_TCP>
dev.hn.0.caps: 7ff<VLAN,MTU,IPCS,TCP4CS,TCP6CS,UDP4CS,UDP6CS,TSO4,TSO6,HASHVAL,UDPHASH>
dev.hn.0.ndis_version: 6.30
dev.hn.0.nvs_version: 393217
dev.hn.1.hwassist: 607<CSUM_IP,CSUM_IP_UDP,CSUM_IP_TCP,CSUM_IP6_UDP,CSUM_IP6_TCP>
dev.hn.1.caps: 7ff<VLAN,MTU,IPCS,TCP4CS,TCP6CS,UDP4CS,UDP6CS,TSO4,TSO6,HASHVAL,UDPHASH>
dev.hn.1.ndis_version: 6.30
dev.hn.1.nvs_version: 393217
dev.hn.2.hwassist: 607<CSUM_IP,CSUM_IP_UDP,CSUM_IP_TCP,CSUM_IP6_UDP,CSUM_IP6_TCP>
dev.hn.2.caps: 7ff<VLAN,MTU,IPCS,TCP4CS,TCP6CS,UDP4CS,UDP6CS,TSO4,TSO6,HASHVAL,UDPHASH>
dev.hn.2.ndis_version: 6.30
dev.hn.2.nvs_version: 393217
For comparison, FreeBSD 12.3 (hitting the issue):
dev.hn.0.hwassist: 17<CSUM_IP,CSUM_IP_UDP,CSUM_IP_TCP,CSUM_IP_TSO>
dev.hn.0.caps: 7ff<VLAN,MTU,IPCS,TCP4CS,TCP6CS,UDP4CS,UDP6CS,TSO4,TSO6,HASHVAL,UDPHASH>
dev.hn.0.ndis_version: 6.30
dev.hn.0.nvs_version: 393217
dev.hn.1.hwassist: 17<CSUM_IP,CSUM_IP_UDP,CSUM_IP_TCP,CSUM_IP_TSO>
dev.hn.1.caps: 7ff<VLAN,MTU,IPCS,TCP4CS,TCP6CS,UDP4CS,UDP6CS,TSO4,TSO6,HASHVAL,UDPHASH>
dev.hn.1.ndis_version: 6.30
dev.hn.1.nvs_version: 393217
dev.hn.2.hwassist: 17<CSUM_IP,CSUM_IP_UDP,CSUM_IP_TCP,CSUM_IP_TSO>
dev.hn.2.caps: 7ff<VLAN,MTU,IPCS,TCP4CS,TCP6CS,UDP4CS,UDP6CS,TSO4,TSO6,HASHVAL,UDPHASH>
dev.hn.2.ndis_version: 6.30
dev.hn.2.nvs_version: 393217
FreeBSD 13.0 (not hitting the issue, but obviously not the STABLE version):
dev.hn.0.hwassist: 17<CSUM_IP,CSUM_IP_UDP,CSUM_IP_TCP,CSUM_IP_TSO>
dev.hn.0.caps: 7ff<VLAN,MTU,IPCS,TCP4CS,TCP6CS,UDP4CS,UDP6CS,TSO4,TSO6,HASHVAL,UDPHASH>
dev.hn.0.ndis_version: 6.30
dev.hn.0.nvs_version: 327680
dev.hn.1.hwassist: 17<CSUM_IP,CSUM_IP_UDP,CSUM_IP_TCP,CSUM_IP_TSO>
dev.hn.1.caps: 7ff<VLAN,MTU,IPCS,TCP4CS,TCP6CS,UDP4CS,UDP6CS,TSO4,TSO6,HASHVAL,UDPHASH>
dev.hn.1.ndis_version: 6.30
dev.hn.1.nvs_version: 327680
dev.hn.2.hwassist: 17<CSUM_IP,CSUM_IP_UDP,CSUM_IP_TCP,CSUM_IP_TSO>
dev.hn.2.caps: 7ff<VLAN,MTU,IPCS,TCP4CS,TCP6CS,UDP4CS,UDP6CS,TSO4,TSO6,HASHVAL,UDPHASH>
dev.hn.2.ndis_version: 6.30
dev.hn.2.nvs_version: 327680
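If I'm reading the netvsc protocol headers right, nvs_version encodes the negotiated NVS protocol version as major << 16 | minor: 327680 is 0x50000 (NVS 5.0) and 393217 is 0x60001 (NVS 6.1, which appears to be the first protocol version with RSC support). Quick check from any shell:
printf '%x\n' 327680   # 50000 -> NVS 5.0
printf '%x\n' 393217   # 60001 -> NVS 6.1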
-
I figured out an interim solution for myself: I created two external switches, one for pfSense and one for all the other VMs. With that it works; no slow speeds anymore. The drawback is that it uses one more port and everything goes through a physical switch.
-
So there's no routing between VMs inside the same host, then? That seems like it's exactly what should trigger this.
-
@stephenw10 I think the key point is that pfSense is not using the same vSwitch as the others. This is an atypical setup and no one in their right mind would do it like this, but I did, and it does work here. I think I simulated "having two (VM) hosts", where it is natural that the VMs can't use the same vSwitch.
-
It feels like coming home, finally.
I still hope there will eventually be a real fix for the situation some people are seeing.
-
Set-VMSwitch -Name "*" -EnableSoftwareRsc $false
Get-VMNetworkAdapter -VMName "vmname" | Where-Object {$_.MacAddress -eq "yourmacaddress"} | Set-VMNetworkAdapter -RscEnabled $false
Seems to have worked for restoring both my WAN upload and inter-VLAN throughput. Is there any official guidance on this?
-
@dd Did you downgrade without a reinstall? The one time I forgot to take a snapshot, and now I'm left with dial-up speeds on my 10G server.
-
@i386dx said in After Upgrade inter (V)LAN communication is very slow (on Hyper-V).:
damn this did the trick right now
Get-VMNetworkAdapter -VMName "vmname" | Where-Object {$_.MacAddress -eq "yourmacaddress"} | Set-VMNetworkAdapter -RscEnabled $false
we were just concentrating on disabling RSC on the vSwitch, but there is also a setting for the VMs.
so the workaround should be:
- disable RSC support on the vSwitch:
Set-VMSwitch -Name "vSwitchName" -SoftwareRscEnabled $false
- disable RSC support on the VM:
Get-VMNetworkAdapter -VMName "VMname" | Set-VMNetworkAdapter -RscEnabled $false
one important side note: right now, you have to enable & disable RSC on the VM after every VM reboot even when the value is still $false!
at least on my host...
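A minimal sketch of that toggle, assuming a VM named "pfSense" and reusing the -RscEnabled parameter from above; it could go in a scheduled task that runs whenever the VM starts:
# flip RSC back on and then off again, since the stored $false is not honored after a VM reboot until it changes
Get-VMNetworkAdapter -VMName "pfSense" | Set-VMNetworkAdapter -RscEnabled $true
Get-VMNetworkAdapter -VMName "pfSense" | Set-VMNetworkAdapter -RscEnabled $false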
-
@m0nji said in After Upgrade inter (V)LAN communication is very slow (on Hyper-V).:
one important side note: right now, you have to enable & disable RSC on the VM after every VM reboot even when the value is still $false!
at least on my host...
Yes, same behaviour on my host too (Windows Server 2022, gen2 VM, I350-T2 adapter, SR-IOV unavailable, VMQ disabled).
In my case I could even leave -SoftwareRscEnabled enabled on the vSwitches and just flip -RscEnabled on and off on the VM network adapter(s) to restore normal bandwidth.
Unfortunately I'm not aware of any way to enforce NVS version 5 (327680) on the hn driver.
-
Indeed, I was hoping there would be some way to force it, but I don't see one.
-
@stephenw10
we should keep an eye on this: https://reviews.freebsd.org/D29075?id=85183
-
I have two Server 2019 hosts (Host1 & Host2) in my homelab, both with the same hardware. Both experienced slow WAN speeds after the update to 2.6. Normally WAN would reach about 1 Gbit, but after the update it was only about 30 Mbit. The servers each have two separate virtual switches for LAN and WAN. Inter-VLAN communication was as expected on 2.6.
The virtual switches are attached as follows:
- WAN vSwitch -> Intel Pro/1000 PT, not shared with host
- LAN vSwitch -> Microsoft Network Adapter Multiplexer Driver -> NIC Team, static teaming, dynamic load balance
On both hosts I did:
Get-VMSwitch | Set-VMSwitch -SoftwareRscEnabled $false
Then I confirmed both servers returned False for both switches using:
Get-VMSwitch | Select-Object *RSC*
This fixed the issue on Host1. On Host2 the WAN speeds were now OK too, BUT: inter-VLAN traffic became insanely slow. After troubleshooting I found that the only difference was that Host2 had been installed more recently and had yet to receive a bunch of Windows updates. I couldn't tell you which update it was, but the updates fixed it.
So if you have inter-VLAN performance issues after disabling RSC, try installing Windows updates.
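If you want to compare patch levels between two hosts, something along these lines should work (Get-HotFix is a stock cmdlet; showing the ten most recent updates is an arbitrary cut):
# list the most recently installed updates on this host
Get-HotFix | Sort-Object InstalledOn -Descending | Select-Object -Property HotFixID, InstalledOn -First 10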
-
@drumdevil said in After Upgrade inter (V)LAN communication is very slow (on Hyper-V).:
installing Windows updates.
For anyone who is having throttled uploads, I found a fix.
Go to your vNIC's Properties > Configure > Advanced and disable "Large Send Offload Version 2" for IPv4 and IPv6; the upload speed goes back to normal.
I think this will also fix your inter-VLAN performance after disabling RSC.
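The PowerShell equivalent of those GUI steps, assuming the host adapter is named "Ethernet" (check Get-NetAdapter for the real name):
# disable Large Send Offload v2 for both address families on the host NIC
Disable-NetAdapterLso -Name "Ethernet" -IPv4 -IPv6
# confirm the resulting state
Get-NetAdapterLso -Name "Ethernet"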
-
@fiblimitless said in After Upgrade inter (V)LAN communication is very slow (on Hyper-V).:
@drumdevil said in After Upgrade inter (V)LAN communication is very slow (on Hyper-V).:
installing Windows updates.
For anyone who is having throttled uploads, I found a fix.
Go to your vNIC's Properties > Configure > Advanced and disable "Large Send Offload Version 2" for IPv4 and IPv6; the upload speed goes back to normal.
I think this will also fix your inter-VLAN performance after disabling RSC.
Cannot confirm this. I still need to enable & disable RSC on the VM after every VM reboot!
Just for clarification: I disabled Large Send Offload on the virtual switch and on the physical NIC. But I didn't reboot the host yet.