pfSense LAGG to ESXi
-
Hi all,
This may not be 100% a pfSense issue, but I'll try. Scenario 1 is this:
- igb0 + igb1 create LAG1 (failover) in pfSense
- LAG1 connects to the ESXi 2x Gbit adapters.
All good so far: ESXi gets a DHCP IP after installation and it's reachable. The problem is that none of the VMs connected to the same VM network get DHCP from LAG1, and they can't even reach it.
In the firewall rules on LAG1 I left everything open for testing.
Any clue?
Second scenario:
- Set igb0 as the management interface and assign IP 172.20.10.1 -> configure the mgmt interface on ESXi with 172.20.10.2. There's no ping/link between them.
- Set igb1 as the VMs' network interface, add a DHCP server, and create a VM network on the ESXi side. None of the VMs get IPs from pfSense.
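For reference, the second scenario can be sanity-checked from the ESXi shell; a minimal sketch, assuming the management vmkernel interface is vmk0:

    # Show the vmkernel IP and netmask (a mask mismatch here
    # would explain the missing ping):
    esxcli network ip interface ipv4 get -i vmk0

    # Ping the pfSense side through the vmkernel stack:
    vmkping 172.20.10.1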
-
"- igb0 + igb1 create a LAG1 (failover) in PFsense"
And what did you do in ESXi? Did you set up the LAGG on the interfaces, and which specific vSwitch did you connect them to?
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004088
NIC teaming in ESXi and ESX

So in your 2nd setup you don't have a LAGG, you just connected interface to interface, and you can't get statics working? Again, which vSwitch did you put your vmkernel on, and which pfSense interface is on that vSwitch with your vmkernel (which carries the IP ESXi is managed on)?
What version of ESXi are you running, and how many interfaces? Guessing 2 minimum. Are you using just the vSphere Client, or are you running vCenter Server? Are you using just standard vSwitches or distributed switches?
Are you directly connecting the interfaces in pfSense to the interfaces in ESXi, or is there a switch in between?
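To answer the vSwitch questions, the mapping can be dumped from the ESXi shell; a sketch (real commands, output will vary per host):

    # Which physical NICs are present and linked up:
    esxcli network nic list

    # vSwitches, their uplinks, and port groups:
    esxcli network vswitch standard list

    # Which port group each vmkernel interface sits on:
    esxcli network ip interface list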
-
For the first setup, yes, I used that same KB from VMware to validate the setup. It works even if I use LACP for the LAGG.
2 uplinks on the VM network, 1 vmkernel.
For the second setup I deleted the LAG interface and initialized each interface independently on pfSense.
ESXi 6, one onboard Intel Gbit + 2x Broadcom Gbit interfaces on PCIe.
2 uplinks: one for the VM management network (igb0), and one connected to the VMs' network (assigned to the VMs) - igb1.
So I can't test your 1st setup without taking down my network, but your vmkernel should be easy enough to troubleshoot.
You say you can't ping - can't ping what? The pfSense IP from ESXi, or ESXi from pfSense? Did you disable the firewall in ESXi? What version are you running? Was this an upgrade or a clean install? Are you sure you're on the right NICs? Can pfSense see the MAC? Can ESXi see the MAC of your pfSense interface?
-
If we're speaking about the first setup:
LAGG on pfSense -> connected to the ESXi management network.
None of the VMs connected to the same VM network (where the 2x Gbit uplinks are) are getting DHCP IPs. If I use manual IPs, same issue. From a VM instance I'm able to ping the ESXi IP (same network) but not the pfSense LAG (also same network).
What is very strange to me is test scenario 2, which is no rocket science... a simple connection, and it didn't work.
-
Well, you're doing something wrong then... Agreed, it's a no-brainer setup. So you're connecting to the wrong interface, or the driver is not correct on ESXi? I am assuming you have validated that you can connect other stuff to the pfSense interfaces, etc.
So troubleshoot what you did wrong - can the devices see each other's MAC? If not, then no, they are not going to talk to each other. If they can, then it's probably some firewall issue.
-
I suspect some firewall shit on ESXi. Somehow something there is blocking traffic.
-
It's not going to block outbound access, and it sure as hell would not block DHCP if the interface is set for DHCP. And firewalls don't block ARP, etc. So if pfSense can't see the MAC of your ESXi interface, then you have a cable problem, a switch problem, or the interface on the other end is not up, the driver isn't working, etc.
This is all really 101 basic connectivity troubleshooting. Go to pfSense and look at the ARP table after you try to ping. You said you set these IPs up as static - did you mess up the mask? pfSense likes to default to /32, for example.
Example:
[2.3.2-RELEASE][root@pfSense.local.lan]/root: arp -a | grep 192.168.9.40
esxi.local.lan (192.168.9.40) at 00:1f:29:54:17:14 on em1 expires in 93 seconds [ethernet]
[2.3.2-RELEASE][root@pfSense.local.lan]/root:

From ESXi:
[root@esxi:~] esxcli network ip neighbor list
Neighbor Mac Address Vmknic Expiry State Type
-------------------- ----------------- ------ --------- ----- -------
192.168.9.32 b8:27:eb:31:70:ab vmk0 810 sec Unknown
192.168.9.100 18:03:73:b1:0d:d3 vmk0 1178 sec Unknown
192.168.9.7 00:0c:29:f0:74:06 vmk0 1125 sec Unknown
192.168.9.8 00:0c:29:48:2d:09 vmk0 1178 sec Unknown
192.168.9.11 00:0c:29:49:91:eb vmk0 1178 sec Unknown
192.168.9.253 00:50:56:00:00:02 vmk0 15 sec Unknown
192.168.9.252 c0:7b:bc:65:4f:13 vmk0 543 sec Unknown
192.168.9.31 b8:27:eb:1c:6e:09 vmk0 1187 sec Unknown

Turn off its firewall if you want.
You will notice mine is off, since it really serves no purpose on my private network. The only devices on the vmkernel network are my trusted devices, managed and administered by me, etc. So I just turn it off. Devices on other segments of my network can't talk to my "lan" where the pfSense vmkernel sits, and if they can, it's to a specific IP on a specific port - e.g. I allow another segment to talk to my Plex server on port 32400.
It's easy enough to turn off:
[root@esxi:~] esxcli network firewall get
Default Action: PASS
Enabled: false
Loaded: false
[root@esxi:~] esxcli network firewall unload
Now no firewall…
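If it still fails with the firewall off, a capture on both ends shows where the DHCP requests die; a sketch, assuming igb1 faces the VMs and vmnic1 is the matching ESXi uplink (check with esxcli network nic list):

    # On pfSense - watch for DHCP discover/offer traffic:
    tcpdump -ni igb1 port 67 or port 68

    # On the ESXi shell - capture what actually reaches the uplink:
    pktcap-uw --uplink vmnic1 -o /tmp/uplink.pcap

If the DISCOVER shows up in the ESXi capture but never on igb1, the problem is between the uplink and pfSense; if it never even reaches the uplink, it's the vSwitch/port group assignment.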
-
Thank you for the detailed explanation and the time spent on my issue.
I have decided to use a very simple setup:
- one interface for MGMT
- one to provide DHCP for the VMs in ESXi
I have created 2 vSwitches and 2 VM networks linked to the 2 physical interfaces, as in the attached screenshots.
Still not getting DHCP on the VMs.


-
Stupid question: can this be because I use a direct connection from pfSense to ESXi? No switch between the two?
I had the same before: LACP on pfSense (LAG0 with LACP) to a CentOS LACP bond, and it worked just fine.
It works when connecting via a switch.
I'll keep it that way, but I still need to separate the DHCP pool from my LAN.
I have created an additional DHCP pool (in the same network), but I'm not able to make it use only that pool - i.e. force all requests coming from ESXi to be served from it.
Any clue on this?
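One approach (an untested sketch, not something confirmed in this thread): pfSense generates ISC dhcpd configuration under the hood, and dhcpd can steer clients into a pool by MAC prefix. VMware's auto-generated guest MACs start with the OUI 00:0c:29, so a class match on that prefix would catch the ESXi VMs; the addresses below are made up for illustration:

    # Hypothetical dhcpd.conf fragment - match VMware guest MACs
    # (substring skips the 1-byte hardware type field):
    class "esxi-vms" {
        match if substring(hardware, 1, 3) = 00:0c:29;
    }

    subnet 192.168.9.0 netmask 255.255.255.0 {
        pool {
            allow members of "esxi-vms";
            range 192.168.9.200 192.168.9.220;
        }
        pool {
            deny members of "esxi-vms";
            range 192.168.9.100 192.168.9.150;
        }
    }

In the pfSense GUI the closest equivalent should be the additional pool's MAC address control fields, if your version exposes them.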