pfSense loses connectivity with more than 4 interfaces
-
Are you trying to connect more than one vNIC to the same vSwitch? And if you're tagging the traffic, what do you have set on the vSwitch? Are you passing the tags with VLAN ID 4095?
Why would you not just run a VLAN on top of the vNIC interface already connected to the vSwitch? I'm not sure what the point of multiple vNICs to the same vSwitch would be… I can try to duplicate it when I get home. I currently have 4 vSwitches with a vNIC in each; 3 of those are tied to physical NICs, and one of them allows tagging.
I currently just use e1000, because I really didn't see much improvement with VMXNET3, and VMXNET3 doesn't play well with CDP, while with e1000 CDP reports the correct speed, duplex, etc.
-
Thanks for the note.
You mean why not just send all the tagged packets and have pfSense manage the VLANs itself instead of the vSwitch? In researching this, the performance gains from using VST instead of VGT seem significant: the vSwitch uses hardware assist for switching that the guest doesn't, so it's a lot more overhead to do it that way. I have a 10 Gbps network between a NAS, another VM host, etc., so I wanted to make sure inter-VLAN routing would work as efficiently as possible.
In fact, the recommended configuration for maximum performance with the SR-IOV support in the 10G interfaces I have is to virtualize at the VM driver level and operate in EST mode, where you pass multiple virtualized interfaces and each of them shows up in the guest through VT-d passthrough. See here: http://www.intel.com/content/www/us/en/network-adapters/converged-network-adapters/converged-network-adapter-sr-iov-on-esxi-5-1-brief.html and here: https://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.vsphere.networking.doc/GUID-EE03DC6F-32CA-42EF-98FC-12FDE06C0BE0.html This was too much complexity for me, so I settled for taking the hit and not using SR-IOV, which should be OK with just a few 10 Gbps interfaces, or at least I hope so. If not, I can always go back, turn that on, and redo the network configuration.
Thanks!
Mike -
Where exactly do you think the inter-VLAN routing is happening? Where is your SVI for each VLAN?
What is the setting on your vSwitch? And what is the actual setting of the vNICs on your pfSense?
If you want to create multiple vNICs in pfSense, each in its own VLAN, versus putting multiple VLANs on the same interface, why would you not connect them to different port groups on the vSwitch, each with the specific tag for that traffic? Then there would be a trunk on the physical NIC.
I could for sure test that by creating multiple vNICs… but I would connect them to different port groups on the same vSwitch.
-
ESX is probably changing your interface order when you add an additional NIC; it has an annoying quirk of some sort that does so. Make note of the MACs and their associated interfaces in the VM settings, and match them up with the MACs shown in ifconfig's output.
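If you want to script that MAC-to-interface check rather than eyeball it, here is a minimal sketch (not from this thread) of pairing interface names with MACs from FreeBSD-style ifconfig output, as pfSense produces. The vmx names and MAC values below are made-up examples:

```python
import re

def map_interfaces_to_macs(ifconfig_output: str) -> dict:
    """Parse FreeBSD-style ifconfig output into {interface: MAC}."""
    mapping = {}
    current = None
    for line in ifconfig_output.splitlines():
        # A new interface block starts at column 0, e.g. "vmx0: flags=..."
        header = re.match(r"^(\w+):", line)
        if header:
            current = header.group(1)
        # FreeBSD prints the MAC on an indented "ether xx:xx:..." line
        mac = re.search(r"\bether\s+([0-9a-f:]{17})", line)
        if mac and current:
            mapping[current] = mac.group(1)
    return mapping

# Sample output; compare this mapping against the MACs in the VM settings.
sample = """vmx0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether 00:50:56:00:00:01
vmx1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether 00:50:56:00:00:02
"""
print(map_interfaces_to_macs(sample))
# -> {'vmx0': '00:50:56:00:00:01', 'vmx1': '00:50:56:00:00:02'}
```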
-
Where exactly do you think the inter-VLAN routing is happening? Where is your SVI for each VLAN?
What is the setting on your vSwitch? And what is the actual setting of the vNICs on your pfSense?
If you want to create multiple vNICs in pfSense, each in its own VLAN, versus putting multiple VLANs on the same interface, why would you not connect them to different port groups on the vSwitch, each with the specific tag for that traffic? Then there would be a trunk on the physical NIC.
I could for sure test that by creating multiple vNICs… but I would connect them to different port groups on the same vSwitch.
The inter-VLAN routing is done in pfSense. Each interface is in a different port group; in my case, instead of 4095 as the VLAN ID, it's 10 or 20 or 30, etc., for each interface.
Thx
Mike -
@cmb:
ESX is probably changing your interface order when you add an additional NIC; it has an annoying quirk of some sort that does so. Make note of the MACs and their associated interfaces in the VM settings, and match them up with the MACs shown in ifconfig's output.
Ah, that would explain a lot! If true, it would be a real defect in VMware, I think. Is there an open bug with them on this issue? This should be noted in the pfSense documentation for ESXi deployments.
I will go back and do exactly as you say; I should be able to remap them on the console after adding them all at once.
thanks!
Mike -
OK, so they are on their own port groups; that makes sense.
I have not run into any sort of interface reorder issue… but then again, I don't add lots of interfaces after setup. When I set up the pfSense VM, I give all of the vNICs a specific MAC so I know for sure which one is which when looking at the pfSense console.
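For reference, assigning a specific MAC can be done in the VM's .vmx file (or through the vSphere client); the ethernetN.addressType / ethernetN.address keys below are the standard VMware ones, and the MAC values are placeholders. Note that manually set static MACs are expected to fall in VMware's reserved 00:50:56:00:00:00 to 00:50:56:3F:FF:FF range:

```
ethernet0.addressType = "static"
ethernet0.address = "00:50:56:00:00:01"
ethernet1.addressType = "static"
ethernet1.address = "00:50:56:00:00:02"
```

With the MACs pinned this way, even if ESX presents the vNICs in a different order, each one can be identified unambiguously from the pfSense console.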
-
I confirm that pfSense is reassigning the NICs randomly if you add them after the initial setup.
My initial configuration was quite simple:
1 ESXi 5.1 host (free license)
1 pfSense 2.2.3 VM with 2x VMXNET3 for WAN and LAN
In order to provide my customers with more services, I felt the need for 6 extra DMZ networks.
The setup is now:
vmx0 - WAN
vmx1 - LAN
vmx2 - PBX DMZ
etc.
After powering on the pfSense VM, I had the bad surprise of discovering that I had no connectivity outside my network. Fortunately, I didn't lose connectivity with the ESXi management interface, and I could reassign the NICs using their MAC addresses from the pfSense console.
I don't know if this is due to virtualization. In my opinion, no; the result would probably be the same with a physical pfSense.
One funny thing, though: the problem occurred only when I added the 6 extra NICs at the same time. I stopped the VM, removed the 6 extra NICs, and added only 2 extra NICs at a time, and pfSense didn't reassign the NICs randomly, which made me think of a hidden license limitation :o in the first place. But it's not ;D.
So if you need to add extra network interfaces to your virtual pfSense, don't do it remotely, as you will probably lose connectivity.
-
I confirm that pfSense is reassigning the NICs randomly if you add them after the initial setup.
No, ESX is reassigning the NICs randomly. The guest has no impact or control over the presented order of the NICs.
I don't know if this is due to virtualization. In my opinion no, the result would probably be the same with a physical pfSense.
You'd be wrong.
If you add physical NICs to a physical box, it might change the ordering, but it does so in a sensible and predictable manner. Say you have igb0 and igb1 and add a 4-port Intel gigabit NIC: depending on the motherboard, it might present the add-in NIC first on the PCI bus, so the add-in card becomes igb0 through igb3, the onboard igb0 becomes igb4, and igb1 becomes igb5.
-
ouch ::)
Thanks cmb,
You're definitely right about how pfSense scans the PCI bus to list the devices. I think I was a little tired yesterday ;D. I can't believe I wrote such an assumption. :o
-
@headhunter_unit23
Still happens in 2024, on ESXi 8.1 with a 3-month-old pfSense install.
When I added a 4th interface, everything went down; I had to reassign the interfaces from the console and lost all of my NAT configuration. Very weird. My WAN was vmx0 and all of a sudden it is vmx1 (unless it is the only one). To be sure, I had to configure the DHCP servers on each interface and switch connections to find out what IP my machine was getting. Licensing might still be a good guess, because I just updated the ESXi trial version.
-
@lonblu said in pfSense loses connectivity with more than 4 interfaces:
@headhunter_unit23
Still happens in 2024, on ESXi 8.1 with a 3-month-old pfSense install.
When I added a 4th interface, everything went down; I had to reassign the interfaces from the console and lost all of my NAT configuration. Very weird. My WAN was vmx0 and all of a sudden it is vmx1 (unless it is the only one). To be sure, I had to configure the DHCP servers on each interface and switch connections to find out what IP my machine was getting. Licensing might still be a good guess, because I just updated the ESXi trial version.
It's a known behavior of ESX and how it probes interfaces.
See https://forum.netgate.com/post/687896 for some more info
There are allegedly some ways to work around it, but ultimately it's an issue that VMware needs to solve in its hardware emulation.