How to use pfSense's DHCP Server within a vSphere/ESXi internal network?
-
Why are you using DirectPath if what you want to run is an ESXi host? Let ESXi handle your NICs…
"But the thing is none of the VMs can resolve their IPs via the DHCP and I wondered if I was missing something.."
What are you trying to say here, that DHCP clients do not get an IP from the DHCP server? Does your DHCP server see the discover? If your DHCP clients and server are not on the same layer 2, then no, you're not going to get DHCP to work unless you have set up a relay.
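If you want to check whether the discovers are even reaching the pfSense VM, a quick capture from the pfSense shell will show them. A minimal sketch, assuming the internal-network vNIC shows up as vmx1 (yours may be em1, vtnet1, etc.):

    # Watch for DHCP traffic (ports 67/68) on the pfSense-side vNIC
    # NOTE: vmx1 is a placeholder - substitute your actual interface name
    tcpdump -ni vmx1 port 67 or port 68

If discovers show up here but clients still get no lease, the problem is on the pfSense side; if nothing shows up, the frames are not reaching that vSwitch/vNIC at all.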
Please draw how you have your network configured and connected; this is really just clickity clickity. If you put the pfSense vNIC on the same vSwitch as your other VMs' vNICs, then they are all in the same layer 2 and DHCP works just fine.
Once you start moving the traffic to the physical world, I'm not sure what you're doing. I have been running pfSense on ESXi for many years and have never had any issues with DHCP or connectivity at all. pfSense provides DHCP for both virtual machines and physical machines.
Attached you will see my ESXi network. All of the physical NICs other than vmnic3 are connected to a smart switch with VLANs set up. vmnic3 is connected directly to my cable modem and provides the pfSense WAN vNIC its public IP via DHCP from my ISP.
You will notice a vmkernel port connected to vmnic1; this is the same layer 2 network as LAN, which is connected to the real world via vmnic2. I break out the vmkernel port to a different vSwitch for performance. As you can see there are other VMs on this vSwitch, and lots of physical devices on that network use pfSense for DHCP.
Then there is the wlan vSwitch, connected to the real world via vmnic0. This vSwitch passes VLAN traffic to pfSense: the untagged network is my 192.168.2/24, and then multiple VLANs, both wired and wireless, pass through here. Notice the port group is set to 4095 for the VLAN ID, which passes all tags through. Then I have my dmz vSwitch, which is not tied to the physical world with any NIC; the only way for those VMs to talk to the real world is to be routed through pfSense.
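If anyone wants to reproduce that trunk port group from the ESXi shell instead of the vSphere client, here is a rough sketch; the vSwitch and port group names below (vSwitchWLAN, WLAN-Trunk) are just placeholders I made up, and vmnic0 should be whatever uplink you are actually using:

    # Create the vSwitch and attach its physical uplink
    esxcli network vswitch standard add --vswitch-name=vSwitchWLAN
    esxcli network vswitch standard uplink add --vswitch-name=vSwitchWLAN --uplink-name=vmnic0
    # Add a port group and set VLAN 4095 so all tags pass through to the pfSense vNIC
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitchWLAN --portgroup-name=WLAN-Trunk
    esxcli network vswitch standard portgroup set --portgroup-name=WLAN-Trunk --vlan-id=4095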
In the 2nd picture you will see the NICs I have in the ESXi host.
So please post up how you're connected and configured and we can walk through what you're doing wrong.
-
Why are you using DirectPath if what you want to run is an ESXi host? Let ESXi handle your NICs…
Well, I was thinking I'd get better performance out of the NICs if I set it up this way, and I mean there is nothing speaking against passing through NICs, right?
What are you trying to say here, that DHCP clients do not get an IP from the DHCP server? Does your DHCP server see the discover? If your DHCP clients and server are not on the same layer 2, then no, you're not going to get DHCP to work unless you have set up a relay.
Yeah, the VMs can't get an IP from the DHCP server, but if I set static IPs everything works fine. I haven't checked whether the DHCP server sees the discover, but I'm sure that the VMs are all on the same layer 2.
Here is my config:
-
"if I set it up this way and I mean there is nothing speaking against passing through NICs, right?"
Other than can not connect them to a vswitch. And alot of extra work for really nothing.. How much extra performance you think your going to get out of your gig interface? Or another way home of a hit do you think its going to be?? How do you leverage any of those nics for vms to be on those networks? It pretty pointless to have 4 nics and not let your esxi host use them. Only 1 vm.. I don't see really what would be the point..
Where exactly are those vms going to get a dhcp from? They are not tied to the physical world.. So you have a vm port group shared with your vmkern port group on the same interface.. But your worried about performance ;) ok heheheh
What is your pfsense box in that picture - the window vm?
-
Other than you can not connect them to a vSwitch. And a lot of extra work for really nothing.
There is no need to connect them to a vSwitch, since I got this network card just for pfSense. I actually only needed two NICs, but since I got this quad-port card for 40€ I thought why not.
For anything else there are still the two onboard NICs, and since the network card actually has two chipsets I could remove them from the DirectPath list and use them with ESXi if I ever needed to.
Where exactly are those VMs going to get DHCP from? They are not tied to the physical world. So you have a VM port group shared with your vmkernel port group on the same interface, but you're worried about performance? ;) OK, heheheh
I don't get what you are trying to say. I just want a separate internal network between my VMs to have a faster way to transfer data between them internally. Every VM also has another NIC which uses vSwitch0, which has the physical NIC, but that's not really relevant to the problem.
This is not my LAN interface; the LAN interface is physical, and that's how it's intended. The VMs should get their IPs from the DHCP server of the pfSense VM (gyro) through the internal network. store is my FreeNAS instance and window is a Windows VM.
I just want to have dynamic IPs on vSwitch1 so I don't have to assign them manually for every VM. vSwitch1 doesn't need access to the internet or anything else; like I said, it's just for internal data transfer.
-
And what is providing DHCP to that vSwitch? Where is the pfSense VM that has DHCP running on a vNIC on that vSwitch?
"Every VM also has another NIC which uses vSwitch0"
What a pointless setup… And where you posted your vSwitch0, the only VM on it is one called windows.
You do understand that if VMs are talking to each other and connected to the same vSwitch, that traffic never enters the physical world. There is ZERO point to multihoming your VMs with a vSwitch connection and then a passed-through NIC as well. It's not getting you anything other than a convoluted mess.
Connect your VMs to a vSwitch. Assign a NIC to that vSwitch that connects it to the physical world = done. VMs talking to VMs never enter the physical world. When a VM needs to talk to the real world, the traffic passes through the vSwitch to the real NIC to your real-world network.
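If you want to sanity-check which VMs and port groups actually sit on which vSwitch, you can do that from the ESXi shell as well; a small sketch (nothing assumed here, these just list what is already configured):

    # List standard vSwitches with their uplinks and port groups
    esxcli network vswitch standard list
    # List port groups and the vSwitch each one belongs to
    esxcli network vswitch standard portgroup list
    # List running VMs and the networks (port groups) their vNICs are attached to
    esxcli network vm list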
Putting a VM port group on the same uplink as your vmkernel port group is a performance HIT! It's pointless to do that, especially when you have real NICs coming out your ass to spare.
-
And what is providing DHCP to that vSwitch? Where is the pfSense VM that has DHCP running on a vNIC on that vSwitch?
"Every VM also has another NIC which uses vSwitch0"
I already said it: it's the VM named gyro, and it's within the Internal Network, just look at the screenshot…
What a pointless setup… And where you posted your vSwitch0, the only VM on it is one called windows.
Because I haven't set up my other VMs yet??
You do understand that if VMs are talking to each other and connected to the same vSwitch, that traffic never enters the physical world. There is ZERO point to multihoming your VMs with a vSwitch connection and then a passed-through NIC as well. It's not getting you anything other than a convoluted mess.
I think you don't get what I'm trying to do… And there is a point, because I also have physical machines in that network which need to connect to the VMs. Could you please stop saying how pointless my setup is? Because it isn't, and I just want to get my problem solved.
-
If pfSense is gyro and your other 2 VMs want DHCP on the vNICs connected to that vSwitch, then they should be getting DHCP. My guess is you have DHCP enabled on the wrong interface, or the VM is asking for DHCP on the wrong interface.
I would suggest you sniff on the pfSense (gyro) vNIC, or just look in your DHCP log. Are you seeing discover packets? If not, then no, DHCP is not going to work.
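On pfSense the DHCP log is under Status > System Logs > DHCP in the GUI; from the shell, something along these lines should answer the discover question (a sketch only - the log file path can differ between pfSense versions):

    # Look for discover/offer activity in the DHCP daemon log
    grep -iE 'discover|offer' /var/log/dhcpd.log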
As to your setup being pointless - sorry, but the way you have explained it, it is!! The performance increase from passing your NICs through to your pfSense is going to be measured on a minuscule scale, if at all. So that is in itself pointless. Giving a VM multihomed interfaces in multiple networks… because you don't want to pass that traffic over the same vSwitch that also talks to the physical world?
If what you're after is performance - sharing a VM port group with your vmkernel port group is a huge performance hit for moving stuff to and from your datastore, etc.
As to what you're trying to do… I agree, I don't get it. Think of a vSwitch as just any other normal physical switch; its uplink is the physical NIC you connect. If machine A is talking to machine B on the same switch with unicast traffic, that traffic doesn't go anywhere other than the 2 ports on that switch. So I'm not sure why you think you need to connect your VMs to multiple vSwitches.
You do understand that if you're using vmxnet3 interfaces, they are 10GbE interfaces. So your VM talking to your other VM on the same vSwitch is not going to have any issue, even while also talking to the real world over the 1GbE uplink - as long as your VM host can handle it. Creating extra vSwitches doesn't help anything at all, other than making for confusion about what is connected to what using what, etc.
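If you want to see that for yourself, a quick throughput test between two VMs on the same vSwitch will show traffic running well above the 1GbE uplink speed. A minimal sketch, assuming iperf3 is installed on both VMs and that 192.168.10.20 is the other VM's address (both are assumptions, not your actual setup):

    # On the first VM, start the server side
    iperf3 -s
    # On the second VM, run a 10-second test against it (IP is a placeholder)
    iperf3 -c 192.168.10.20 -t 10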
-
You do understand that if you're using vmxnet3 interfaces, they are 10GbE interfaces. So your VM talking to your other VM on the same vSwitch is not going to have any issue, even while also talking to the real world over the 1GbE uplink - as long as your VM host can handle it. Creating extra vSwitches doesn't help anything at all, other than making for confusion about what is connected to what using what, etc.
This, this is the one thing I didn't know… :o In that case my plan of having a separate internal network really is pointless. I'm sorry for all the misunderstandings, I just didn't know that. :-[
And if the difference in performance really is that minuscule, then there wouldn't even have been a need for the network card, since my motherboard has two NICs.
But I'm really curious about how small the difference in performance is; do you have any benchmarks or something like that at hand?
-
Here, this is a bit dated…
http://blogs.vmware.com/performance/2010/12/performance-and-use-cases-of-vmware-directpath-io-for-networking.html
Are you planning on doing upwards of 50K packets per second? If so, OK, then maybe you need to save some CPU cycles and throw out all the benefits of just letting ESXi handle the NICs and giving your VMs virtual NICs.
To be honest, if you were setting up a network that would come close to moving that many packets over a VM infrastructure, you sure as hell would not be buying your NICs on eBay for $40, nor would you be here asking questions ;) If you were, you got promoted into the wrong job or lied like crazy on your resume, and you're not going to last in your position for more than a couple of weeks ;) hehehe
To be honest, the only real reason I could think of for DirectPath I/O in normal setups would be if for whatever reason you had a NIC that ESXi did not support but your VM OS did. Then sure, you could pass it through to use that NIC. But you lose a lot of advantages this way, and it would be a workaround. You would be better off spending a couple of $ and getting a NIC that ESXi supports, so you could use it the way it's really intended to be used when setting up ESXi hosts.
As to more NICs - it is a good thing, well worth a $40 cost. I would suggest you break out your vmkernel port to its own NIC. This will for sure give you a performance increase moving files to and from your datastore from your physical network.
Having more NICs allows you to create more networks and not have to VLAN and hairpin connections, which gives you better performance, etc. I have 4 physical NICs in mine; I would much rather have a couple more. You see my wlan vSwitch is doing VLAN tagging, all sharing the same physical NIC. If I had more, I could break those out to their own physical NICs. Probably not all that big a deal because that traffic is mostly WiFi, but it makes for a simpler setup and for sure no hairpins, etc.
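For the "break your vmkernel out to its own NIC" suggestion, here is a rough sketch of the ESXi shell side; the vSwitch/port group names and the IP below are placeholders, and vmnic1/vmk1 should be adjusted to whatever is free on your host:

    # Dedicated vSwitch and port group for vmkernel traffic, with its own uplink
    esxcli network vswitch standard add --vswitch-name=vSwitchMgmt
    esxcli network vswitch standard uplink add --vswitch-name=vSwitchMgmt --uplink-name=vmnic1
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitchMgmt --portgroup-name=Mgmt
    # New vmkernel interface on that port group with a static address (placeholder IP)
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=Mgmt
    esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static --ipv4=192.168.1.5 --netmask=255.255.255.0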
-
Are you planning on doing upwards of 50K packets per second? If so, OK, then maybe you need to save some CPU cycles and throw out all the benefits of just letting ESXi handle the NICs and giving your VMs virtual NICs.
To be honest, if you were setting up a network that would come close to moving that many packets over a VM infrastructure, you sure as hell would not be buying your NICs on eBay for $40, nor would you be here asking questions ;) If you were, you got promoted into the wrong job or lied like crazy on your resume, and you're not going to last in your position for more than a couple of weeks ;) hehehe
No I don't, this is just for my home network. I've also never really worked with ESXi that much before, so I'm pretty much a newbie. I've used Proxmox before, but when I decided to set up my own router with pfSense I thought I'd switch to ESXi because I thought a bare-metal hypervisor would feel better - which it does, no more annoying kernel updates.
There is obviously a lot I have to learn now. ;D But I'm doing it to learn something, and it's a fun hobby for me. I'm not employed as a network admin of course, I'm just a software developer. :D
As to more NICs - it is a good thing, well worth a $40 cost. I would suggest you break out your vmkernel port to its own NIC. This will for sure give you a performance increase moving files to and from your datastore from your physical network.
Having more NICs allows you to create more networks and not have to VLAN and hairpin connections, which gives you better performance, etc. I have 4 physical NICs in mine; I would much rather have a couple more. You see my wlan vSwitch is doing VLAN tagging, all sharing the same physical NIC. If I had more, I could break those out to their own physical NICs. Probably not all that big a deal because that traffic is mostly WiFi, but it makes for a simpler setup and for sure no hairpins, etc.
Yeah, I've also thought about using the NICs for more networks; I want to separate my WiFi and the devices in my living room, so I'll do that. For now I've disabled DirectPath for all the NICs and I'm now using a configuration similar to yours. It works and the speed is fine, so I'm happy with that result. Thanks for your help and patience! :)