Netgate Discussion Forum

pfSense VM gets stuck

Virtualization · 11 Posts · 4 Posters · 1.3k Views
panicos

Hello everyone,

I have been using pfSense as a VM deployed on an ESXi 6.0 host for 2 years now with no problems.
Now I have upgraded the VMware host to 6.5 because Flash is no longer supported and all that.
So I reattached all my VMs on the new ESXi and all was good, except the pfSense one got stuck after one day of running.

ESXi version - 6.5.0 U3
pfSense version - 2.5.2-RELEASE (amd64)

pfSense was not processing traffic anymore; I tried to ping something from it (in the console) and got "no buffer space". Also, netstat -rn displayed only IPv6 routes (which I don't use), although route get 8.8.8.8 displayed the correct IPv4 path.
I tried to:
- ifconfig down and up the interface
- reboot the machine
- shut down the machine
- disconnect and reconnect the network adapter of the machine
- upgrade the VM hardware compatibility
After reboots and shutdowns there was no more "no buffer space", but pings got other replies, such as "host is down". Nevertheless, the unreachable behaviour was the same.

Nothing from the above worked.
The only thing that solved the problem (for the moment, because I think it will reappear) was to reboot the entire ESXi host, which is not OK at all.
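For reference, the checks I ran on the pfSense console looked roughly like this (em0 is just an example interface name, not necessarily mine):

    ping 8.8.8.8                             # replied with "No buffer space available"
    netstat -rn                              # only IPv6 routes were listed
    route get 8.8.8.8                        # still showed the correct IPv4 path
    ifconfig em0 down && ifconfig em0 up     # bringing the interface down and up again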

I know there is a pfSense troubleshooting guide here: https://docs.netgate.com/pfsense/en/latest/troubleshooting/buffer-space-errors.html
I have followed it; it did not help, nor do I see a solution in it.
I have searched Google for other similar topics, but found only assumptions and opinions, no solution.

Does anyone have any suggestion on how this problem should be approached? Is there a fix, an upgrade or something?

mr.rosh @panicos

@panicos Deploy a new pfSense VM and transfer the config? See the outcome.

Gertjan @panicos

@panicos

I'm not using ESXi myself, but I would check whether your ESXi version has known issues with stock FreeBSD 12.2. Look for guidelines on how to make FreeBSD 12.2 work with your VM.

Any suspect messages on the console while booting?
Like whether the NICs are found, etc.?

Edit: and where are the logs?

No "help me" PMs please. Use the forum, the community will thank you.

panicos @mr.rosh

@mr-rosh You think that would solve it? I will wait and see if it jams again, then I will also try your advice.

panicos @Gertjan

@gertjan There is no suspect message on the console while booting. Looking at the VMware Compatibility Guide, I see FreeBSD 12.2 is supported starting with ESXi 6.7, while on my version the supported one is FreeBSD 11.x.
Anyway, I just have a dropdown to choose FreeBSD 32- or 64-bit (no version to choose from).
On the other hand, the former ESXi 6.0, which I had been running for 2 years, had no FreeBSD support at all (according to the VMware compatibility matrix), and it was working flawlessly (the pfSense, I mean).
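As far as I understand, that dropdown only sets the generic guest OS type in the VM's .vmx file; something like the line below (the exact value is my assumption, not copied from my VM):

    guestOS = "freebsd-64"    # generic "FreeBSD (64-bit)" guest type; no per-version FreeBSD entry on this ESXi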

panicos @mr.rosh

@mr-rosh It is not working. I tried: I reinstalled pfSense from scratch in a new VM and I get the same problem. When I push some traffic through it, it gets stuck, inaccessible. Only rebooting the host helps.

I am getting panicked.

mr.rosh @panicos

@panicos What's the network adapter on the ESXi host?

panicos @mr.rosh

@mr-rosh I have not physically opened the host to check, but based on what I see in the CLI, it is an Intel Gigabit ET Quad Port Server Adapter:

                    [root@localhost:~] vmware -vl
                    VMware ESXi 6.5.0 build-17167537
                    VMware ESXi 6.5.0 Update 3
                    [root@localhost:~]
                    [root@localhost:~] esxcli network nic list | grep In
                    vmnic2 0000:03:00.0 igb Up Up 1000 Full 00:1b:21:9f:d0:08 1500 Intel Corporation 82576 Gigabit Network Connection
                    vmnic3 0000:03:00.1 igb Up Up 1000 Full 00:1b:21:9f:d0:09 1500 Intel Corporation 82576 Gigabit Network Connection
                    vmnic4 0000:04:00.0 igb Up Up 1000 Full 00:1b:21:9f:d0:0c 1500 Intel Corporation 82576 Gigabit Network Connection
                    vmnic5 0000:04:00.1 igb Up Up 1000 Full 00:1b:21:9f:d0:0d 1500 Intel Corporation 82576 Gigabit Network Connection
                    [root@localhost:~]

                    [root@localhost:~] esxcli network nic get -n vmnic2
                    Advertised Auto Negotiation: true
                    Advertised Link Modes: 10BaseT/Half, 10BaseT/Full, 100BaseT/Half, 100BaseT/Full, 1000BaseT/Full
                    Auto Negotiation: true
                    Cable Type: Twisted Pair
                    Current Message Level: 7
                    Driver Info:
                    Bus Info: 0000:03:00.0
                    Driver: igb
                    Firmware Version: 1.5, 0x00011d40
                    Version: 5.3.3
                    Link Detected: true
                    Link Status: Up
                    Name: vmnic2
                    PHYAddress: 1
                    Pause Autonegotiate: true
                    Pause RX: false
                    Pause TX: false
                    Supported Ports: TP
                    Supports Auto Negotiation: true
                    Supports Pause: true
                    Supports Wakeon: true
                    Transceiver: internal
                    Virtual Address: 00:50:56:57:1a:86
                    Wakeon: MagicPacket(tm)
                    [root@localhost:~]

                    [root@localhost:~] vmkchdev -l |grep vmnic[2-5]
                    0000:03:00.0 8086:10e8 8086:a02c vmkernel vmnic2
                    0000:03:00.1 8086:10e8 8086:a02c vmkernel vmnic3
                    0000:04:00.0 8086:10e8 8086:a02c vmkernel vmnic4
                    0000:04:00.1 8086:10e8 8086:a02c vmkernel vmnic5
                    [root@localhost:~]

panicos

I have found the problem in the meantime: I have NIC teaming between the ESXi host and the physical switch; although I configured it following official guides, it appears ARP broadcasts are not flowing through it and the vSwitch becomes saturated, denying all traffic. Maybe I am hitting a bug or something. I will carry on from here on the virtualization side.
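In case it helps someone else, the teaming/failover policy on the vSwitch can be checked from the ESXi shell with something like the commands below (vSwitch0 is an example name):

    esxcli network vswitch standard list                              # vSwitches, their uplinks and port groups
    esxcli network vswitch standard policy failover get -v vSwitch0   # load balancing policy, active/standby uplinks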
Thanks everyone for the suggestions.
Ticket may be closed.

awebster @panicos

@panicos When using multiple NICs, it is critical that the VMware NIC teaming is configured to match the switch to which the ports are connected.
Based on personal experience, I find that "Route based on originating virtual port ID" works fine if there are no LAGs created on the switch, but if you are using a switch LAG, then "Route based on IP hash" is needed.
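For example, the policy can be changed from the ESXi shell roughly like this (vSwitch0 is a placeholder; note that on a standard vSwitch, IP hash requires a static etherchannel on the switch side, not LACP):

    esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash   # switch LAG present
    esxcli network vswitch standard policy failover set -v vSwitch0 -l portid   # no LAG: originating virtual port ID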

                        –A.

panicos @awebster

@awebster Yes, you are right. I have an etherchannel on the switch and NIC teaming with IP hash on the ESXi side. The configuration is not the problem; I followed guides for it.
The problem might be with the network card's driver in ESXi, which, although supported, might have some issues.
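If it is the driver, I suppose the next step is to check which igb driver package the host is actually running and whether VMware lists a newer one for this NIC; roughly:

    esxcli software vib list | grep -i igb    # installed igb driver VIB and its version
    esxcli system module get -m igb           # details of the loaded igb module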
