Netgate Discussion Forum

    pfSense 2.1.3 + ESXi 5.5 = reboot needed after every pfSense shutdown

    Virtualization | 25 Posts, 9 Posters, 7.1k Views
    • H
      heper
      last edited by

      Perhaps there is an issue with passthrough, FreeBSD 8.3, and ESXi.

      Since current development is focused on FreeBSD 10, I wouldn't get my hopes up that this gets fixed soon.
      FreeBSD 10 has para-virtualized driver support built in, and thus would no longer require the use of legacy stuff. Performance should go up dramatically.

      On that hardware, getting around 1 Gbit/s with the legacy virtual drivers should not be a problem (I run around 10 boxes on similar hardware).
      My advice: stop using passthrough and go legacy-virtual ;)
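      For anyone wanting to try this suggestion, the NIC type is set in the VM's .vmx file. A minimal sketch, assuming a hypothetical pfSense VM; the adapter entries and network name below are illustrative, not taken from this thread:

      ```shell
      # Sketch: the .vmx entries that select a virtual NIC type instead of
      # PCI passthrough (values here are illustrative placeholders):
      vmx='ethernet0.present = "TRUE"
      ethernet0.virtualDev = "e1000"
      ethernet0.networkName = "LAN vSwitch"'

      # Pull out the adapter type, as one might with grep on the real .vmx file:
      adapter=$(printf '%s\n' "$vmx" | grep virtualDev | cut -d'"' -f2)
      echo "$adapter"   # "e1000" = legacy virtual driver; "vmxnet3" = para-virtualized
      ```

      With the FreeBSD 10 base heper mentions, `vmxnet3` becomes the interesting value; on 2.1.x the legacy `e1000` is the safe choice.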

      • D
        devianceluka
        last edited by

        While we're hopefully still searching for an answer… Off the subject: is pfSense 2.2, in its current state, considered as stable and secure as 2.1.3?

        • johnpozJ
          johnpoz LAYER 8 Global Moderator
          last edited by

          Yeah, you only have a 120 Mbit connection - there is not going to be an issue using that with a virtual NIC vs. passthrough. Your thinking on the performance is wrong. If you're worried about virtual performance, then you shouldn't be running virtual at all, since you can't get over the mindset of using it the way it's designed.

          My file storage box is a VM; its NIC (vmxnet3) is connected to the vSwitch that connects to the physical world with cheap NICs, on an N40L box. And I get great performance to and from the real-world network.

          Here is my workstation to my VM storage box:

          ------------------------------------------------------------
          Client connecting to storage.local.lan, TCP port 5001
          TCP window size:  256 KByte

          [344] local 192.168.1.100 port 52507 connected with 192.168.1.8 port 5001
          [ ID] Interval      Transfer    Bandwidth
          [344]  0.0-10.0 sec  1.06 GBytes  912 Mbits/sec

          Why should I be worried about performance on that??

          phy box -- switch -- phy NIC (N40L) -- vSwitch -- VM NIC (storage VM)
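          As a sanity check on the numbers above: 1.06 GBytes in 10 seconds works out to about 911 Mbit/s, assuming iperf's usual convention of binary prefixes for bytes and decimal prefixes for bits, which matches the reported 912 once iperf's own rounding of the transfer figure is allowed for:

          ```shell
          # Convert iperf's "1.06 GBytes in 10.0 sec" to Mbits/sec.
          # iperf reports transfer in binary units (1 GByte = 2^30 bytes)
          # and bandwidth in decimal units (1 Mbit = 10^6 bits).
          mbps=$(awk 'BEGIN { printf "%.0f", 1.06 * 1024^3 * 8 / 10.0 / 1e6 }')
          echo "$mbps Mbits/sec"
          ```

          That is within a few percent of gigabit wire speed, which is the point johnpoz is making about the virtual NIC path.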

          An intelligent man is sometimes forced to be drunk to spend time with his fools
          If you get confused: Listen to the Music Play
          Please don't Chat/PM me for help, unless mod related
          SG-4860 24.11 | Lab VMs 2.8, 24.11

          • D
            devianceluka
            last edited by

            @johnpoz:

            Yeah, you only have a 120 Mbit connection - there is not going to be an issue using that with a virtual NIC vs. passthrough.

            Again, it's not about throughput only; it's about LATENCY and MANY connections at the same time from many devices, where latencies play a role.

            Stick to the subject, please. I want and I need passthrough.

            So is this now a "known" bug or something? Maybe a driver fault? Can I update the drivers somehow?

            Would something in System>Advanced>Networking solve it?

            Any other suggestion?

            • johnpozJ
              johnpoz LAYER 8 Global Moderator
              last edited by

              Latency, really, on a freaking LAN? What could it possibly be, .001 seconds? You're nuts if you think latency is going to be an issue, phy vs. virt. You're causing yourself grief for no reason.


              • D
                devianceluka
                last edited by

                Hopefully found the solution:

                System > Advanced > Networking

                Check Enable device polling
                Uncheck everything below
                Reboot

                Survived 3 host reboots already.
                2x Intel i210AT on Supermicro X10SL7-F
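                For reference, pfSense's "Enable device polling" checkbox corresponds to FreeBSD's polling(4) on the 8.x kernels that 2.1.x ships, enabled per interface. A sketch of how one might confirm it took effect; the ifconfig sample line below is illustrative, not captured from a real box:

                ```shell
                # When polling is active, FreeBSD's ifconfig shows a POLLING flag
                # on the interface. This sample line is a made-up illustration:
                ifconfig_out='em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> options=209b<RXCSUM,TXCSUM,VLAN_MTU,POLLING> mtu 1500'

                # Check for the flag, as one might against real `ifconfig em0` output:
                case "$ifconfig_out" in
                  *POLLING*) polling=on ;;
                  *)         polling=off ;;
                esac
                echo "polling=$polling"
                ```

                Polling replaces NIC interrupts with timer-driven checks of the hardware, which is relevant to the CPU-usage question in the next post.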

                • D
                  devianceluka
                  last edited by

                  It isn't quite solved yet. There is always at least 50-51% CPU inside pfSense. That means one of the cores is always maxed out. Does anyone have any idea what these switches do and why they kind of fixed this?

                  EDIT: And that is with an idle connection and pfSense idling!

                  • W
                    wcrowder
                    last edited by

                    @devianceluka:

                    It isn't quite solved yet. There is always at least 50-51% CPU inside pfSense.

                    Are you sure you're reading the graph right, on the Performance tab of vSphere? It can be confusing: the left side says percent, while the right side says MHz. The graph scales by the maximum usage in view, so if your max was 200 MHz and you are using 100 MHz right now, it will say 50%. My current usage is 62 MHz with max usage at 1291 MHz. The max is so high on mine because every hour a 2-minute script runs that uses a lot of processor.
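                    The scaling wcrowder describes is just percent = current MHz / peak MHz x 100. Working both of his examples, a chart that "looks" half-full can mean anything from a genuinely busy core to a nearly idle one:

                    ```shell
                    # vSphere's percent axis, per the post above, is scaled to the
                    # peak usage visible in the chart: percent = current / max * 100.
                    pct_example=$(awk 'BEGIN { printf "%.0f", 100 / 200 * 100 }')  # 100 MHz of a 200 MHz peak
                    pct_wcrowder=$(awk 'BEGIN { printf "%.1f", 62 / 1291 * 100 }') # 62 MHz of a 1291 MHz peak
                    echo "${pct_example}% ${pct_wcrowder}%"
                    ```

                    So a 50% reading on the chart is not, by itself, evidence that a core is pegged; the MHz axis is the one to trust.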

                    • S
                      segobi
                      last edited by

                      I have the same problem with pfSense 2.1.5 and ESXi 5.5 with Intel I350 NICs.
                      It works when you reboot the VM.

                      On the latest pfSense 2.2 snapshot it seems to be working correctly without the need to reboot.

                      • M
                        mysongranhills
                        last edited by

                        http://christopher-technicalmusings.blogspot.com/2012/12/passthrough-pcie-devices-from-esxi-to.html

                        Try that. It seems to be a BSD/ESXi issue. I found it while dealing with a similar problem.

                        • M
                          msmith9xr4
                          last edited by

                          Did you ever get further on this?

                          I'm going to try device polling before the next reboot and see if that helps me. The tickboxes below… I can't see how those would impact the loss of connectivity after the "first" reboot. It's 100% reliable: every SECOND reboot is fine. I am sure all VMware and passed-through Intel NICs support polling fine.

                          I agree 100% that you need passthrough; virtual NICs are just not good enough to replace bare metal intelligently. Even when I only had 200 Mbit in, I could see a huge loss on the ESX NIC. I even played with the 3 different driver options you can pass to hack toward pfSense - always lossy. That can't happen with voice and other stuff.

                          The dual-reboot thing makes me wonder if it's a slice thing - I know the flash installs to two slices, and I seem to remember reading they alternate at every reboot. Any comments on that?

                          Next week I will have one WAN on gigabit/300 and the other at 200/30. Of course you need a good Intel card for those, and to be smart to even see the throughput behind pfSense.

                          I think I ordered a quad ET 82576; my dual ET plus single 82574 CT pass through fine and dandy to the two WANs, which still are just 300/300 and 200/30.

                          ESX 5.1 is on an X9SCM-F E3-1230 32GB running lots of PCI passthrough to other stuff too.

                          pfSense 2.2's limiters are busted, so 2.1.5 is best for me for now.

                          I may get around to trying a 2.2 pfSense and see if reboot works. Last time I tried the upgrade it broke everything, which I later found out was just because 2.2 busted limiters.

                          Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.