pfSense with ESXi?
-
thanks guys.
could anybody comment on this setup and confirm it's ok?

modem > switch port 1 (untagged member of vlan30)
switch port 2 (tagged member of vlan30) to pfsense WAN (vlan30)
pfsense LANs (vlan40, vlan41, vlan42) connect back to switch port 2, which is a tagged member of vlan40, 41 and 42

so basically switch port 2 would be carrying the WAN (vlan30) down and the LANs (vlan40, 41, 42) up.
is this safe?
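for what it's worth, on the pfsense side those tagged vlans end up as vlan child interfaces of the physical NIC. a rough sketch of the equivalent FreeBSD commands (the NIC name em0 is an assumption; in practice you'd create these under Interfaces > Assignments > VLANs in the pfsense GUI rather than at the shell):

```shell
# Sketch only - pfsense normally does this via the GUI, not the shell.
# Assumes the trunk from switch port 2 lands on NIC em0 (hypothetical name).
ifconfig vlan30 create vlan 30 vlandev em0   # WAN, tagged vlan30
ifconfig vlan40 create vlan 40 vlandev em0   # LAN, tagged vlan40
ifconfig vlan41 create vlan 41 vlandev em0   # LAN, tagged vlan41
ifconfig vlan42 create vlan 42 vlandev em0   # LAN, tagged vlan42
```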
-
In my opinion, yes.
did you install VMware Tools to change the network driver to vmx?
-
not yet. i can't seem to get it going.
i create a pfsense vm and assign it 2 x virtual nics on vswitch2
vnic1 = em0_vlan30 = public IP
vnic2 = em1_vlan40 = 192.168.40.1/24

i can't seem to get my management network on vswitch0 (vlan40) to speak to vswitch2 (vlan40).
any ideas?
-
There is an option on ESX to pass all tagged VLANs from the switch through to the virtual switch. I don't remember what that option is called, so try searching the forum or the VMware site.
You can do this setup with only one interface too.
If your WAN VLAN is just for the modem and ESX, you can leave it untagged on the VMware port as well. Keep in mind that you can't mix tagged and untagged VLANs on the same port.
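for reference, setting a port group's VLAN from the ESXi shell looks roughly like this (ESXi 5.x+ esxcli syntax; the port group name "WAN" is an assumption - older ESX used esxcfg-vswitch instead):

```shell
# Sketch only - run on the ESXi host; names are assumptions.
# Tag the WAN port group with VLAN 30:
esxcli network vswitch standard portgroup set -p "WAN" --vlan-id 30
# Or leave it untagged (VLAN 0) if the modem link is untagged end to end:
esxcli network vswitch standard portgroup set -p "WAN" --vlan-id 0
```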
-
DOH! finally got it going. i was setting a virtual switch vlan and then setting the vlans on the physical switch as well.
every time i set the vlan in pfsense, it wouldn't communicate.

quick question. am i better off setting the vlans in:
1. the physical switch & virtual switch (with pfsense just having normal interfaces, eg wan, lan1, lan2)
2. the physical switch & pfsense (with the virtual switch just having a normal interface)

i certainly need the physical switch with vlans so the wan and lans can be on the same physical cable.
-
quick question. am i better off setting the vlans in:
1. the physical switch & virtual switch (with pfsense just having normal interfaces, eg wan, lan1, lan2)
2. the physical switch & pfsense (with the virtual switch just having a normal interface)

I expect it will depend on configuration information I don't think you have provided. I'm also not familiar with the details of what ESXi provides.
Option 1 is probably required if other VMs need to share the physical interface used by the pfSense VLANs.
If not, and it is possible in ESXi to give a VM exclusive control of a physical interface, then I would grant the pfSense VM exclusive access to one of the NICs and do all the VLAN work in pfSense. The next time you have to troubleshoot this, it will almost certainly be easier if all the VLAN configuration lives in pfSense rather than being split between pfSense and ESXi.
-
well, i definitely need the physical switch to be vlan'd to get the wan and lans on the same physical cable.
i've played about with it a little and it doesn't make much difference, to be fair.
you can either:
1. use multiple normal interfaces on pfsense, eg WAN, LAN1, LAN2, LAN3, and connect each one to a separate virtual switch which does the vlans to the physical switch
2. use vlans within pfsense, connect them to a single (non-vlan'd) virtual switch, and let the traffic be managed from within pfsense

i think it basically depends on where you want to manage your vlans. in my case, i've chosen to do it within pfsense (which mirrors the way you would do it in the physical world).
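for reference, option 1 (vlans handled on the virtual switch side) would look roughly like this on an ESXi 5.x host - the port group names and vSwitch2 are assumptions, and each port group would be handed to pfsense as a plain interface:

```shell
# Sketch only - one port group per VLAN on the vSwitch; names are assumptions.
esxcli network vswitch standard portgroup add -p "LAN40" -v vSwitch2
esxcli network vswitch standard portgroup set -p "LAN40" --vlan-id 40
esxcli network vswitch standard portgroup add -p "LAN41" -v vSwitch2
esxcli network vswitch standard portgroup set -p "LAN41" --vlan-id 41
esxcli network vswitch standard portgroup add -p "LAN42" -v vSwitch2
esxcli network vswitch standard portgroup set -p "LAN42" --vlan-id 42
```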
-
In ESXi, it is possible to leave dot1q tagging untouched on the vSwitch, allowing you to configure VLANs on pfSense just as you would with a trunk port running to it. VLAN 4095 is a special-case VLAN ID on ESXi that lets you run trunks directly into your virtual machines. This is the feature marcelloc and Xuridisa were referring to. It would allow you to do the tagging/untagging on pfSense rather than across multiple vSwitches. If you are moving a lot of traffic, you should probably compare the performance hit of pfSense doing the tagging/untagging vs. multiple vSwitches and multiple virtual NICs. I have often wondered myself where that is best done in this exact situation.
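on an ESXi 5.x host, setting that up from the shell looks roughly like this (the port group name "Trunk" and vSwitch2 are assumptions):

```shell
# Sketch only - VLAN ID 4095 turns the port group into a trunk (VGT mode),
# so tagged frames reach the pfSense VM untouched. Names are assumptions.
esxcli network vswitch standard portgroup add -p "Trunk" -v vSwitch2
esxcli network vswitch standard portgroup set -p "Trunk" --vlan-id 4095
```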
-
i came from an alix and i have noticed a 1.5-2ms longer ping on the wan when it's a vm compared to the alix.
i might give it a shot and see if there is a difference between the two approaches you describe.
-
all my ESX boxes, and all the customer ones I've been on (which adds up to a ton), add a very tiny fraction of 1 ms of latency. You shouldn't have 1.5-2 ms added by ESX. Especially compared to an ALIX: you're generally running ESX on vastly faster hardware than a 500 MHz Geode, and it actually has less latency through it (though we're still talking small fractions of 1 ms).
-
yes. my isp does fluctuate slightly, so i can't really say 1.5-2ms with any degree of accuracy. when i used the isp's router (a thomson) as a bridge, it was about 24-25ms. i switched that to a draytek 110 / alix2d3, which brought it down to 21.5-22ms.
switching to a vm has now put it up to about 23.5ms.
thanks for confirming that this is the norm, as i thought i might have misconfigured something somewhere which needed tweaked.
just out of curiosity, how have you got on with pfsense being a vm on esxi? does your modem/bridge go directly into your esxi box, or are you using a managed switch to place the wan and lans on the same physical hardware? i've had no problems and i've been running it for about a week now, so i'm hopeful it will continue like that.
-
Trying to compare latency to anything outside of your network and relating that back to something local to your network isn't reasonable. 1-2 ms differences on a few hops of your ISP's network alone, much less actually going anywhere out on the Internet, are going to happen all the time for a wide variety of reasons. Response time to your WAN IP is the only reasonable way to judge things local to your network, anything beyond that is far too variable to be able to say X changed things by 1-2 ms.
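To make that concrete: rather than eyeballing individual pings, summarise a run of RTT samples to the WAN-side target, the way the RRD quality graph effectively does. A minimal sketch (the sample values are invented for illustration):

```shell
# Hypothetical RTT samples (ms) to the WAN gateway; values are made up.
samples="21.8 22.1 21.9 23.6 22.0 21.7 25.2 21.7"

echo "$samples" | tr ' ' '\n' | awk '
  { sum += $1
    if (NR == 1 || $1 < min) min = $1
    if ($1 > max) max = $1 }
  END { printf "mean=%.1f ms  min=%.1f  max=%.1f  spread=%.1f\n",
               sum/NR, min, max, max - min }'
# prints: mean=22.5 ms  min=21.7  max=25.2  spread=3.5
```

If the spread across a run is already larger than the 1-2 ms shift you're trying to attribute to a hardware change, a spot ping can't distinguish the two; only the long-period average can.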
-
can you clarify that for me please? i have a static ip 9*...9 which is set in pfsense, but my wan gateway for this is 9*...1.
pfsense defaulted to this gateway (9*...1) for the RRD graphs and monitoring; i've not set it to anything else. i assume this (or any other gateway) is what you should monitor.
as above, i have noticed a difference in latency when i've swapped out modems, ie when i went from a thomson/alix combination to a draytek 110/alix combination it dropped by almost 2ms.
going from the draytek 110/alix combination to a draytek 110/esxi combination made the latency increase, albeit by approximately the amount you mentioned.
i know it's a minuscule amount, but i would have thought it would have stayed the same or decreased given the better hardware, ie esxi rather than the alix.
is the latency increase down to virtualisation overheads?
-
Monitoring your gateway is fine. If it's something that is generally very steady over long periods of time, as shown in the quality graph, and only changes when you change things local to your end, then it is probably safe to pin that back to local changes you've made.

I didn't realize you were referring to the quality graph over long periods of time; it sounded like you pinged things on occasion and were attributing a 1-2 ms change to something you did. Even for your gateway you'll commonly see more than 1-2 ms variance from one time of day to another depending on many different factors, though that won't necessarily always be the case. Checking a ping time on occasion is much different from comparing repeated ping history like the RRD graph shows.

So you probably do have that kind of difference from going to ESX in that case. Why, I don't know; there isn't that much difference generally. Pinging from the physical server this site runs on, through a firewall in ESX, out of ESX up to the datacenter's router, adds 0.2-0.3 ms vs. pinging the LAN IP of the firewall (with a response time in the neighborhood of 0.5 ms, close to what LAN-to-LAN pings commonly are), and that's nothing more than adding the ~0.2-0.3 ms response time from the firewall's WAN to that router. That's more or less what a fully physical network would see, so it's not typical of ESX.