Netgate Discussion Forum

    pfSense with ESXi?

    Hardware · 19 Posts · 6 Posters · 9.2k Views
    louis-m

      DOH! Finally got it going. I was setting a VLAN on the virtual switch and also setting the VLANs on a physical switch; every time I set the VLAN in pfSense, it wouldn't communicate.

      Quick question: am I better off setting the VLANs in:

      1. physical switch & virtual switch (with pfSense just having normal interfaces, e.g. WAN, LAN1, LAN2)
      2. physical switch & pfSense (with the virtual switch just having a normal interface)

      I certainly need the physical switch with VLANs so the WAN and LANs can be on the same physical cable.

      wallabybob

        @louis-m:

        quick question: am I better off setting the VLANs in:

        1. physical switch & virtual switch (with pfSense just having normal interfaces, e.g. WAN, LAN1, LAN2)
        2. physical switch & pfSense (with the virtual switch just having a normal interface)

        I expect it will depend on configuration details you haven't provided. Also, I'm not familiar with the specifics of what ESXi offers.

        Option 1 is probably required if other VMs need to share the physical interface used by the pfSense VLANs.

        If not, and if ESXi allows a VM exclusive control of a physical interface, then I would grant exclusive access to one of the NICs to the pfSense VM and do all the VLAN work in pfSense. The next time you have to troubleshoot this, it will almost certainly be easier if all the VLAN configuration lives in pfSense rather than being split between pfSense and ESXi.
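        If you take this route, pfSense's VLANs are normally created in the GUI under Interfaces > Assignments > VLANs, but the underlying FreeBSD commands give a feel for what's involved. A minimal sketch, assuming the pfSense VM has exclusive use of em0 and hypothetical VLAN IDs 10 and 20:

        ```shell
        # Sketch only: pfSense normally does this via the GUI; em0 and the
        # VLAN IDs are hypothetical stand-ins for your NIC and tags.
        ifconfig vlan10 create vlan 10 vlandev em0   # tag 10 on parent em0
        ifconfig vlan20 create vlan 20 vlandev em0   # tag 20 on parent em0
        ifconfig vlan10 up
        ifconfig vlan20 up
        ```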

          louis-m

          Well, I definitely need the physical switch to be VLAN'd to get the WAN and LANs on the same physical cable.

          I've played about with it a little, and to be fair it doesn't make much difference. You can either:
          1. use multiple normal interfaces on pfSense (e.g. WAN, LAN1, LAN2, LAN3) and connect each one to a separate virtual switch, which handles the VLANs to the physical switch, or
          2. use VLANs within pfSense, connect them to a separate (non-VLAN'd) virtual switch, and manage the traffic from within pfSense.

          I think it basically depends on where you want to manage your VLANs. In my case, I've chosen to do it within pfSense (which mirrors the way you would do it in the physical world).
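          For comparison, option 1 keeps the tagging on the ESXi side. A sketch using esxcli (available from ESXi 5.0 onward; the vSwitch name, port-group names, and VLAN IDs here are all hypothetical):

          ```shell
          # Option 1 sketch: one tagged port group per LAN on the vSwitch,
          # so the pfSense VM just sees plain untagged interfaces.
          # Names and VLAN IDs are hypothetical.
          esxcli network vswitch standard portgroup add --portgroup-name=LAN1 --vswitch-name=vSwitch1
          esxcli network vswitch standard portgroup set --portgroup-name=LAN1 --vlan-id=10
          esxcli network vswitch standard portgroup add --portgroup-name=LAN2 --vswitch-name=vSwitch1
          esxcli network vswitch standard portgroup set --portgroup-name=LAN2 --vlan-id=20
          ```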

            bdwyer

            In ESXi, it is possible to leave 802.1Q tagging untouched on the vSwitch, allowing you to configure VLANs in pfSense as you would with a trunk port running to it.  VLAN 4095 is a special-case VLAN ID on ESXi that lets you run trunks directly into your virtual machines; this is the feature marcelloc and Xuridisa were referring to.  It allows you to do the tagging/untagging in pfSense rather than across multiple vSwitches.  If you are moving a lot of traffic, you should probably compare the performance hit of pfSense doing the tagging/untagging vs. multiple vSwitches and multiple virtual NICs.  I have often wondered myself where that is best done in this exact situation.
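            The VLAN 4095 approach can be sketched the same way: setting a standard port group's VLAN ID to 4095 puts it in virtual guest tagging (VGT) mode, passing tagged frames straight through to the VM. Again a sketch only; the port-group name is hypothetical:

            ```shell
            # VGT sketch: VLAN ID 4095 tells the standard vSwitch to pass
            # 802.1Q tags through untouched, so pfSense sees the full trunk.
            # "pfSenseTrunk" is a hypothetical port-group name.
            esxcli network vswitch standard portgroup set --portgroup-name=pfSenseTrunk --vlan-id=4095
            ```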

            CCNP, MCITP

            Intel Atom N550 - 2gb DDR3
            Jetway NC9C-550-LF
            Antec ISK 300-150
            HP ProCurve 1810-24
            Cisco 1841 & 2821, Cisco 3550 x3

              louis-m

              I came from an ALIX, and I have noticed pings on the WAN are about 1.5-2 ms longer as a VM compared to the ALIX.
              I might give your suggestion a shot to see if it makes a difference.

                cmb

                All my ESX boxes, and all the customer ones I've been on (which adds up to a ton), add a very tiny fraction of 1 ms of latency. You shouldn't have 1.5-2 ms added by ESX. Especially compared to an ALIX: generally you're running ESX on vastly faster hardware than a 500 MHz Geode, and it actually has less latency through it (though we're still talking small fractions of 1 ms).

                  louis-m

                  Yes, my ISP does fluctuate slightly, so I can't really say 1.5-2 ms with any degree of accuracy. When I used the ISP's router (a Thomson) as a bridge, it was about 24-25 ms. I switched that to a DrayTek 110 / ALIX 2D3, which brought it down to 21.5-22 ms.
                  Switching to a VM has now put it up to about 23.5 ms.
                  Thanks for confirming that this is the norm, as I thought I might have had something misconfigured somewhere that needed tweaking.
                  Just out of curiosity, how have you got on with pfSense as a VM on ESXi? Does your modem/bridge go directly into your ESXi host, or are you using a managed switch to place the WAN and LANs on the same physical hardware? I've had no problems, and I've been running it for about a week now, so I'm hopeful it will continue like that.

                    cmb

                    Trying to compare latency to anything outside of your network and relating that back to something local to your network isn't reasonable. 1-2 ms differences on a few hops of your ISP's network alone, much less actually going anywhere out on the Internet, are going to happen all the time for a wide variety of reasons. Response time to your WAN IP is the only reasonable way to judge things local to your network, anything beyond that is far too variable to be able to say X changed things by 1-2 ms.

                      louis-m

                      Can you clarify that for me please? I have a static IP 9*...9 set in pfSense, but my WAN gateway for it is 9*...1.
                      pfSense defaulted to this gateway (9*...1) for the RRD graphs and monitoring; I've not set it to anything else. I assume this (or any other gateway) is what you should monitor.
                      As above, I noticed a difference in latency when I swapped out modems, i.e. when I went from a Thomson/ALIX combination to a DrayTek 110/ALIX combination it dropped by almost 2 ms.
                      Going from the DrayTek 110/ALIX combination to a DrayTek 110/ESXi combination made the latency increase, albeit by approximately the amount you mentioned.
                      I know it's a minuscule amount, but I would have thought it would have stayed the same or decreased given the better hardware, i.e. ESXi rather than the ALIX.
                      Is the latency increase down to virtualization overhead?

                        cmb

                        Monitoring your gateway is fine. If it's something that is generally very steady over long periods of time, as shown in the quality graph, and only changes when you change things local to your end, then it's probably safe to pin it back to local changes you've made. I didn't realize you were referring to the quality graph over long periods; it sounded like you pinged things on occasion and were attributing a 1-2 ms change to something you did. Even for your gateway you'll commonly see more than 1-2 ms variance from one time of day to another depending on many different factors, though that won't always be the case. Checking a ping time on occasion is much different from comparing repeated ping history like the RRD graph shows. So you probably do have that kind of difference from going to ESX in that case.

                        Why, I don't know; there isn't that much difference generally. Pinging from the physical server this site runs on, through a firewall in ESX, out of ESX up to the datacenter's router, adds 0.2-0.3 ms vs. pinging the LAN IP of the firewall (with response time in the neighborhood of 0.5 ms, close to what LAN-to-LAN pings commonly are), and that's nothing more than adding the ~0.2-0.3 ms response time from the firewall's WAN to that router. That's more or less the same as a fully physical network would see, so what you're seeing is not typical of ESX.
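                        To compare setups this way, a long-run average from ping's summary line is more meaningful than eyeballing individual replies. A small sketch of extracting the average RTT from that summary line (the sample line below is canned data for illustration, not a real measurement; both BSD and Linux ping print this min/avg/max format):

                        ```shell
                        # Canned ping summary line; in practice capture it with something
                        # like: summary=$(ping -q -c 100 "$gateway" | tail -1)
                        summary="round-trip min/avg/max/stddev = 0.312/0.514/1.203/0.187 ms"

                        # Field 2 after "= " is "0.312/0.514/1.203/0.187 ms"; the second
                        # slash-separated field of that is the average.
                        avg=$(echo "$summary" | awk -F'= ' '{print $2}' | awk -F'/' '{print $2}')
                        echo "avg RTT: $avg ms"   # prints: avg RTT: 0.514 ms
                        ```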

                        Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.