    pfSense 2.1 VMware high host CPU usage

    • D
      deagle
      last edited by

      I have two ESX hosts so I can run two nodes in CARP, and I don't want to give up that redundancy by going to a physical box. I wish we could figure this out; almost everyone I talk to who runs 2.x on ESX has this problem. I tried vmxnet3 NICs today with the latest driver from VMware Tools 9 and I still see roughly 10x the CPU in esxtop.
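
      Side note in case anyone wants to compare: it's worth confirming the guest really picked up the vmxnet3 driver and didn't silently fall back to e1000. A rough check from the pfSense shell (module and interface names are what I'd expect on pfSense 2.1 / FreeBSD 8.x, so treat them as examples):

      # is the VMware Tools vmxnet3 module loaded?
      kldstat | grep -i vmxnet
      # vmxnet3 interfaces usually show up as vmx3f0/vmx3f1, e1000 ones as em0/em1
      ifconfig -a | egrep '^(vmx|em)'
      # which driver is bound to each virtual NIC
      pciconf -lv | grep -B3 -i vmware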

      • L
        LogicalApex
        last edited by

        I am also experiencing this issue on my ESXi box with pfSense 2.1 running in a VM with the latest open-vm-tools installed.

        The ESXi host is a Dell R620 with Intel NICs.

        I'm running ESXi 5.5. Will be upgrading to 5.5 U1 this weekend to see if that helps any.

        • K
          kenshirothefist
          last edited by

          Exactly the same here with pfSense 2.1.2 on top of ESXi 5.1 U2 … Has this already been identified as a bug? Are the developers aware of this issue?

          • L
            LogicalApex
            last edited by

            @kenshirothefist:

            Exactly the same here with pfSense 2.1.2 on top of ESXi 5.1 U2 … Has this already been identified as a bug? Are the developers aware of this issue?

            No one seems to acknowledge this. The base recommendation is to use Intel NICs, but in my case I'm using all Intel NICs and the problem persists. I'm also on HCL-listed servers running the official OEM (Dell) install image.

            This problem is still occurring on ESXi 5.5 U1.

            If it matters, I'm using Intel I350 1G NICs.
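
            In case it helps with comparing notes, the physical NIC and its driver can also be checked from the ESXi shell; these are standard esxcli commands on 5.x, and vmnic0 is just an example name:

            # list uplinks with driver (an I350 should show igb), link state and speed
            esxcli network nic list
            # more detail for one uplink, including driver version and negotiated speed/duplex
            esxcli network nic get -n vmnic0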

            • K
              kenshirothefist
              last edited by

              Has anybody also noticed increased latency when running pfSense on ESXi 5.x? I haven't had time to test it, but I'm really curious whether this high ESXi CPU load is just a "CPU load issue" or whether it also affects firewall performance (especially when many TCP connections are being handled simultaneously)?

              • K
                kenshirothefist
                last edited by

                Bump. Any news on this? Anybody solved this?

                • B
                  biggsy
                  last edited by

                  This keeps popping up.  It might be helpful if people experiencing this problem posted a few standard things about their setup.  Then we might be able to see whether there is something common between them:

                  • What is the ESXi host machine and processor?
                  • Which version of pfSense and whether 32 or 64-bit?
                  • How many vCPUs have you allocated to the VM?
                  • How much memory have you allocated to the VM?
                  • Have you  installed the pfSense packaged VM tools or the VMware-supplied tools?
                  • Are you using the e1000 adapter type or something else?
                  • K
                    kenshirothefist
                    last edited by

                    • What is the ESXi host machine and processor?

                    IBM, Intel Xeon X5650

                    • Which version of pfSense and whether 32 or 64-bit?

                    2.1.3 64-bit

                    • How many vCPUs have you allocated to the VM?

                    1 CPU @ 2.67 GHz

                    • How much memory have you allocated to the VM?

                    512 MB

                    • Have you  installed the pfSense packaged VM tools or the VMware-supplied tools?

                    pfSense packaged VM tools

                    • Are you using the e1000 adapter type or something else?

                    E1000

                    BTW: in my particular case there is very low, constant bandwidth (approx. 2 Mbps) but thousands of active TCP connections (many small packets). Currently I see only about 2% CPU load inside pfSense (approx. 50 MHz), yet approx. 1800 MHz of Consumed Host CPU.
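
                    For reference, this is roughly how I'm reading the guest-side numbers (standard commands on pfSense 2.1 / FreeBSD 8.x; the host-side MHz figures come from the vSphere performance charts):

                    # state table size - those thousands of active connections
                    pfctl -si | grep -A4 "State Table"
                    # per-thread CPU load inside the guest, to compare against Consumed Host CPU
                    top -SH
                    # interrupt rates, since this is mostly many small packets
                    vmstat -i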

                    • B
                      biggsy
                      last edited by

                      kenshirothefist,

                      Forgot to ask:

                      • Any other pfSense packages running?

                      I assume you have seen the last post in this thread https://forum.pfsense.org/index.php?topic=41647.0.  Anything like that going on in your system?

                      I should also say that I've never experienced this problem, even though I've run multiple 32 and 64-bit versions of pfSense on at least four different (HP) hardware platforms since ESXi 3.5 was released.

                      • K
                        kenshirothefist
                        last edited by

                        • Any other pfSense packages running?

                        Open-VM-Tools, OpenVPN, pfBlocker, remote logging … however, even with all of these packages disabled, host CPU usage is still high.

                        • D
                          deagle
                          last edited by

                          • What is the ESXi host machine and processor?
                            Tried many builds of 5.1 and 5.5 with same result
                            Supermicro X8SIL
                            Intel(R) Xeon(R) CPU X3440 @ 2.53GHz (Lynnfield)

                          • Which version of pfSense and whether 32 or 64-bit?
                            Tried 2.1.1 x64, then tried 2.1.2-3 x86

                          • How many vCPUs have you allocated to the VM?
                            Tried 1, had to bump up to 2 because of this issue; 50 Mbit/s of throughput = 70-80% of one physical core

                          • How much memory have you allocated to the VM?
                            Tried 512-2048 MB

                          • Have you  installed the pfSense packaged VM tools or the VMware-supplied tools?
                            Tried the packaged tools in the past but have since read not to use them. Then tried the VMware-supplied tools; no difference.

                          • Are you using the e1000 adapter type or something else?
                            Tried both e1000 and vmxnet3 (w/VMware-supplied driver), no difference.

                          Packages - Avahi, OpenVPN export util, Cron, RRD Summary.
                          It also happens on a fresh install.

                          Just to be clear, you have to watch esxtop to see this issue; it doesn't show up in the guest.
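
                          To make that concrete, this is where I'm looking (just an example of the procedure, not exact output):

                          # inside the pfSense guest - shows only a few percent busy
                          top -SH
                          # on the ESXi host (SSH / Tech Support Mode) - CPU view, press 'e' plus the VM's GID to expand it
                          esxtop
                          # the hidden load shows up under %USED / %SYS for the VM's vmx and vcpu worlds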

                          • K
                            kenshirothefist
                            last edited by

                            @biggsy, any news regarding this topic?

                            • B
                              biggsy
                              last edited by

                              I can't see anything in common between these configs and haven't been able to reproduce the problem in any way. I only have one machine to play with now, though.

                              Have you guys checked that link about speed mismatch?

                              • K
                                kenshirothefist
                                last edited by

                                @biggsy:

                                Have you guys checked that link about speed mismatch?

                                I have auto-negotiation and it negotiates at 1000/Full … Anyway, I have 20+ VMs running on this host and only this pfSense appliance is having these issues with high pCPU load, although pfSense is also the only FreeBSD-based VM (the others are CentOS- and Ubuntu-based).
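
                                For anyone else checking, the negotiated media is also visible from the pfSense shell; em0 here is just my WAN interface name:

                                # should report something like: media: Ethernet autoselect (1000baseT <full-duplex>)
                                ifconfig em0 | grep media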

                                • B
                                  biggsy
                                  last edited by

                                  The worst I can do is about 93% CPU running a 120 Mbit/s download from AARNET (it's local).

                                  That's with a single vCPU on a Xeon E3-1265L v2 @ 2.5 GHz inside a Gen8 MicroServer.

                                  Idle, the VM runs along at about 1.5% CPU  :-[

                                  • D
                                    deagle
                                    last edited by

                                    That's the thing: it has no business using 93% of one core at 120 Mbit/s; virtualization overhead should be minimal, as it is with other OSes.

                                    I'm starting to think that people who "don't have" this problem aren't really seeing it.

                                    • K
                                      kenshirothefist
                                      last edited by

                                      Is it possible that this is related to the VMware virtual machine monitor mode?

                                      http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1036775

                                      For example:

                                      datetime| vmx| MONITOR MODE: allowed modes : BT HV HWMMU
                                      datetime| vmx| MONITOR MODE: user requested modes : BT HV HWMMU
                                      datetime| vmx| MONITOR MODE: guestOS preferred modes: BT HWMMU HV
                                      datetime| vmx| MONITOR MODE: filtered list : BT HWMMU HV

                                      Where:

                                      • allowed modes – refers to the modes that the underlying hardware is capable of.
                                      • user requested modes – refers to the setting defined in the virtual machine configuration.
                                      • guestOS preferred modes – refers to the default values for the selected guest operating system.
                                      • filtered list – refers to the actual monitor modes acceptable for use by the hypervisor, with the left-most mode being utilized.

                                      I have "automatic" for my pfsense VM and it reads like this:

                                      datetime| vmx| I120: MONITOR MODE: allowed modes          : BT32 HV HWMMU
                                      datetime| vmx| I120: MONITOR MODE: user requested modes  : BT32 HV HWMMU
                                      datetime| vmx| I120: MONITOR MODE: guestOS preferred modes: HWMMU HV BT32
                                      datetime| vmx| I120: MONITOR MODE: filtered list          : HWMMU HV BT32

                                      Therefore it is using the hardware MMU and hardware instruction-set virtualization … I can't change it right now, but can someone test with different settings and post the results?
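
                                      If someone is able to test: as far as I can tell, the monitor mode can be overridden per VM while it is powered off, either under "Configuration Parameters" in the vSphere client or by editing the .vmx directly. I believe the relevant options are the two below (the values are just one combination worth trying; the # lines are only my annotations, not part of the file):

                                      # force shadow page tables instead of the hardware MMU (EPT/RVI)
                                      monitor.virtual_mmu = "software"
                                      # keep hardware-assisted CPU virtualization (HV)
                                      monitor.virtual_exec = "hardware"

                                      After powering the VM back on, the MONITOR MODE lines in vmware.log should confirm what was actually selected.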


                                      Also, is it possible that this is related to using a distributed vSwitch? Can someone test with a regular vSwitch vs. a distributed vSwitch (again, I can't switch my environment over to a regular vSwitch right now)?

                                      • D
                                        deagle
                                        last edited by

                                        Distributed vSwitch is just an abstraction for many standard vSwitches.

                                        • K
                                          kenshirothefist
                                          last edited by

                                          FYI: there is even more overhead when you go from 1 vCPU to 2 vCPUs … I had 2.3 GHz of CPU usage when my pfSense VM was configured with 1 vCPU (approaching the 2.67 GHz limit), then I reconfigured the VM to use 2 vCPUs and now I see 3.0 GHz of CPU usage (probably from CPU thread thrashing) ... this is really annoying ... and I really don't want to go back to physical ...
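
                                          If you still have the 2-vCPU config around, it might be worth checking whether the extra consumption is real work or SMP scheduling overhead. In esxtop's CPU view (press 'e' plus the VM's GID to expand it), these are the columns I'd watch, going from memory of ESXi 5.x:

                                          esxtop
                                          # %CSTP - vCPUs co-stopped waiting for each other (multi-vCPU overhead)
                                          # %RDY  - ready to run but not scheduled
                                          # %SYS  - VMkernel time spent on the VM's behalf (network I/O lands here)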

                                          • KOMK
                                            KOM
                                            last edited by

                                            I've been following this thread but didn't think it was affecting me.  Then I took a look.  pfSense tells me it's using 2% CPU.  VMware tells me it's using almost nothing.  ESXTop tells me 20%…

                                            My config:

                                            Dell Powervault NX3000 (8 x L5520 @ 2.26 GHz)
                                            ESXi 5.5U1
                                            pfSense 2.1.3 i386
                                            2 x vCPU, 2GB RAM
                                            VM version 8 hardware
                                            Intel E1000 vNICs
