pfSense VM with LACP teams and SMB MultiChannel issue
Hi. We're running pfSense 2.4.5-RELEASE-p1 (amd64) in HA on Hyper-V 2019. The master has a dedicated Hyper-V host. Unfortunately, that hardware's (HPE DL360 Gen8) RAID controller wasn't supported, otherwise it might have been a physical machine rather than a VM. Anyway, as of yet we only have Gbit connections. All our hosts have one or two LACP teams to a stack of rack switches, both for redundancy and for increased bandwidth. The same goes for the pfSense host: it has two LACP teams, one for the internal VLANs and one for the uplink to the internet (yeah, I know, it's a 'classic' setup).
Usually with teams, a single stream / file copy won't exceed the speed of an individual link in the team. Windows Server 2012 and up introduced SMB3, which has MultiChannel. MultiChannel benefits from teams by splitting the transfer over multiple links, which in the end gives you more throughput than any single link in the team.
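To illustrate why a single copy sticks to one link while MultiChannel can spread out, here's a toy sketch of a layer3+4 transmit hash. The additive hash, IPs, and ports are all made up; real switches use XOR/CRC over the header fields, but the property that matters is the same: identical tuple, identical link, every time.

```python
# Toy model of a layer3+4 LACP transmit hash -- NOT real switch code.
# Real gear hashes with XOR/CRC; this additive stand-in keeps the key
# property: the same TCP 4-tuple always maps to the same team member.
TEAM_LINKS = 2  # two 1 Gbit members in the team

def pick_link(src_ip, dst_ip, src_port, dst_port):
    """Map a TCP 4-tuple deterministically onto a team member index."""
    h = sum(int(o) for o in src_ip.split(".")) \
      + sum(int(o) for o in dst_ip.split(".")) \
      + src_port + dst_port
    return h % TEAM_LINKS

# A classic single-stream copy: one 4-tuple, so always the same link.
print(pick_link("10.0.1.10", "10.0.1.20", 50001, 445))  # -> 0

# SMB MultiChannel: several TCP connections with different source ports,
# so the flows can land on different members and use both links.
print({pick_link("10.0.1.10", "10.0.1.20", p, 445) for p in (50001, 50002)})  # -> {0, 1}
```

So one copy maxes out at one member's speed, while MultiChannel's extra connections can occupy both members of the team.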
Now, when I copy files (all Server 2019 VMs) from one VM to another VM on a different host, I actually use a good chunk of bandwidth on both links, i.e. about 1.6 - 1.7 Gbps consistently.
Now for the issue: if this traffic goes to another subnet, hence passing through pfSense, I can't get past exactly 1 Gbps, i.e. link speed. The pfSense box has plenty of CPU left for processing. As said, the pfSense host has two teams, and the pfSense VM has two virtual NICs, each connected to one of the teams (through a vSwitch, of course). pfSense sees the NICs as 10 Gbps links.
When I look up the SMB MultiChannel stats, they actually show 2 Gbps of traffic, and both client and server are seen with two links in this case, which means in terms of firewall rules all is well (it only needs the default port 445).
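For what it's worth, my working theory (pure assumption on my part, and the MACs below are made up) is that some hop is hashing on MAC addresses only: once the traffic is routed, every frame carries the same pfSense vNIC MAC, so all the MultiChannel connections hash onto the same team member. A toy sketch of that collapse:

```python
# Toy MAC-based (layer 2) hash -- NOT real teaming code, and the MACs
# are hypothetical. It just shows that when every routed frame shares
# the gateway's MAC, all flows pick the same team member.
TEAM_LINKS = 2

def mac_hash_link(src_mac, dst_mac):
    # stand-in additive hash for a real XOR/CRC over the MAC pair
    h = sum(int(b, 16) for b in (src_mac + "-" + dst_mac).split("-"))
    return h % TEAM_LINKS

CLIENT_MAC  = "00-15-5d-01-02-10"  # hypothetical client VM MAC
GATEWAY_MAC = "00-15-5d-01-02-fe"  # hypothetical pfSense vNIC MAC

# Eight MultiChannel connections, but the routed frames all carry the
# same src/dst MAC pair, so the hash input never changes:
links = {mac_hash_link(CLIENT_MAC, GATEWAY_MAC) for _ in range(8)}
print(len(links))  # -> 1: everything lands on one 1 Gbit link
```

If that's what's happening it would explain the exact 1 Gbps cap, but I haven't been able to confirm it.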
So, has anyone else run across this? I'd expect the Windows VMs to build up the connections and pfSense, being 'just a firewall' in between, to pass them straight through. Anyone with a clue?