Netgate Discussion Forum

    Packetloss on pfsense firewall

General pfSense Questions
32 Posts, 5 Posters, 5.9k Views
1-21Giggawatts

It's using the 82583V network card driver - is this fully supported with this version of pfSense?

      em0@pci0:1:0:0: class=0x020000 card=0x00008086 chip=0x150c8086 rev=0x00 hdr=0x00
      vendor = 'Intel Corporation'
      device = '82583V Gigabit Network Connection'
      class = network
      subclass = ethernet
      em1@pci0:2:0:0: class=0x020000 card=0x00008086 chip=0x150c8086 rev=0x00 hdr=0x00
      vendor = 'Intel Corporation'
      device = '82583V Gigabit Network Connection'
      class = network
      subclass = ethernet
      em2@pci0:3:0:0: class=0x020000 card=0x00008086 chip=0x150c8086 rev=0x00 hdr=0x00
      vendor = 'Intel Corporation'
      device = '82583V Gigabit Network Connection'
      class = network
      subclass = ethernet
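For reference, a listing like the one above comes from pciconf -lv, run from the pfSense shell (console menu option 8 or Diagnostics > Command Prompt). A minimal sketch to show just the Intel NICs:

    pciconf -lv | grep -A4 '^em'    # each em device plus its vendor/device/class lines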

1-21Giggawatts

One thing I notice in the dmesg dump is that it only seems to be loading drivers for 3 NICs - there are 4 on the unit - perhaps that is causing an issue? Only 2 are connected, which matches the link state. Any help with this is much appreciated - it's driving me nuts!

        em0: <Intel(R) PRO/1000 Network Connection 7.6.1-k> port 0xe000-0xe01f mem 0xd0600000-0xd061ffff,0xd0620000-0xd0623fff irq 16 at device 0.0 on pci1
        em0: Using an MSI interrupt
        em0: Ethernet address: 00:e0:67:05:24:40
        em0: netmap queues/slots: TX 1/1024, RX 1/1024

        em1: <Intel(R) PRO/1000 Network Connection 7.6.1-k> port 0xd000-0xd01f mem 0xd0500000-0xd051ffff,0xd0520000-0xd0523fff irq 18 at device 0.0 on pci2
        em1: Using an MSI interrupt
        em1: Ethernet address: 00:e0:67:05:24:42
        em1: netmap queues/slots: TX 1/1024, RX 1/1024

        em2: <Intel(R) PRO/1000 Network Connection 7.6.1-k> port 0xc000-0xc01f mem 0xd0400000-0xd041ffff,0xd0420000-0xd0423fff irq 19 at device 0.0 on pci3
        em2: Using an MSI interrupt
        em2: Ethernet address: 00:e0:67:05:24:43
        em2: netmap queues/slots: TX 1/1024, RX 1/1024

        em0: link state changed to UP
        em1: link state changed to UP
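A quick way to confirm how many em NICs the kernel actually attached at boot (a sketch; run from the pfSense shell):

    dmesg | grep -E '^em[0-9]+: <'      # one probe line per attached NIC
    dmesg | grep -c 'Ethernet address'  # rough count of attached NICs on this box

If em3 never shows up here, the kernel simply never found the fourth port.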

1-21Giggawatts

I'm probably going to try to go back to an earlier version - is there any way to export the configuration from 2.4.5 so I don't have to configure 2.4.4 from scratch?

Gertjan @1-21Giggawatts

            @1-21Giggawatts said in Packetloss on pfsense firewall:

is there any way to export the configuration from 2.4.5 so I don't have to configure 2.4.4 from scratch?

            Diagnostics > Backup & Restore > Backup & Restore
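Alongside the GUI download, a belt-and-braces local snapshot can be taken from the shell before experimenting with downgrades (a sketch; /conf/config.xml is where pfSense keeps the live configuration):

    cp /conf/config.xml /root/config-backup-$(date +%Y%m%d).xml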

Before you shift back: pfSense 2.4.5 uses FreeBSD 11.3-STABLE and has pretty good Intel NIC support.
But ...

Fire up Google and type
FreeBSD Intel 82583V

and check out the first link found ... 235147 – em(4) driver not working for Intel 82583V Gigabit chip.
This bug report concerns FreeBSD 12.0 and mentions issues with the 82583V NICs; it also states that FreeBSD 11.2 - the FreeBSD version used by 2.4.4-p3 - did work correctly.
FreeBSD 11.3 probably picked up the new 82583V driver code that 12.0 and up also use.

A patch was proposed; I can't tell whether it was also backported to FreeBSD 11.3.

Btw, IMHO it's not entirely the fault of FreeBSD. It seems to have dropped support for the old 'legacy' interrupt handling. Also, some boards out there use modern gigabit NICs but hook them up to the system the old way ... which doesn't support real gigabit throughput at all, or places a huge load on the system while trying.

            Is there something you can do in your BIOS to overcome the NIC issue ?


1-21Giggawatts @Gertjan

@Gertjan I'm not 100% sure which driver it is using - do these lines from my general system log at boot-up indicate I am using the em driver? What does the em1 signify?
              May 20 16:57:06 kernel em1: link state changed to UP

I also noticed that one of my interfaces is using the same IRQ number as the <ACPI PCI-PCI bridge> - could that cause an issue like this if it also happened to be the IRQ for the LAN interface?

              May 20 15:48:41 kernel em1: <Intel(R) PRO/1000 Network Connection 7.6.1-k> port 0xd000-0xd01f mem 0xd0500000-0xd051ffff,0xd0520000-0xd0523fff irq 18 at device 0.0 on pci2
              May 20 15:48:41 kernel pci2: <ACPI PCI bus> on pcib2
              May 20 15:48:41 kernel pcib2: [GIANT-LOCKED]
              May 20 15:48:41 kernel pcib2: <ACPI PCI-PCI bridge> irq 18 at device 28.2 on pci0
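A quick way to see how the interrupts actually landed (a sketch; run from Diagnostics > Command Prompt or the shell) - vmstat -i lists one row per interrupt source with its count and rate, so a NIC sharing a line with the bridge is easy to spot:

    vmstat -i | grep -E 'interrupt|irq|em'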

1-21Giggawatts

It's a little custom box for pfSense - when connecting a monitor and rebooting I don't see any option for getting into the BIOS, which is a real pain. I only see a Ctrl-S prompt to open the Intel Boot Agent, which doesn't give me access to any of the IRQ settings.

This is all getting a bit too hard - does anyone know whether, if I export my configuration from 2.4.5, it can be imported again into 2.4.4?

Gertjan @1-21Giggawatts

                  @1-21Giggawatts said in Packetloss on pfsense firewall:

                  what does the em1 signify?

Intel NICs use the driver that identifies itself as "em" - old Intel ones are known as 'fxp', just as Realtek is known as 'rl', etc.
The first NIC found is registered as em0, the second as em1, and so on.

Btw: you really should have em0, em1, em2 and em3 on your system. Whether or not some of them are hooked up doesn't make a difference.
I also have a quad Intel NIC card, and all 4 exist, although I have only 2 of them assigned to interfaces.
The fact that you only have 3 out of 4 means one NIC is bad: em3 isn't found. That means trouble - which is actually useful, because now you know where to look next ;)
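If in doubt about which physical port a given emN is, each attached unit also exposes a small sysctl subtree (a sketch; the unit number is all that "em1" signifies):

    sysctl dev.em.1.%desc      # human-readable device description
    sysctl dev.em.1.%location  # slot/function on the PCI bus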

                  This is my 'kernel hardware detection log (dmesg)' :

                  First two lines : the system discovers it has a PCI bus ...

                  pcib3: <ACPI PCI-PCI bridge> at device 30.0 on pci0
                  pci2: <ACPI PCI bus> on pcib3
                  

and then the first card is found - an old quad Intel NIC :

em0: <Intel(R) PRO/1000 Legacy Network Connection 1.1.0> port 0xd8c0-0xd8ff mem 0xef980000-0xef99ffff,0xefa00000-0xefa3ffff irq 18 at device 2.0 on pci2
em0: Ethernet address: 6c:b3:11:50:c6:c6
em0: netmap queues/slots: TX 1/256, RX 1/256
em1: <Intel(R) PRO/1000 Legacy Network Connection 1.1.0> port 0xdc00-0xdc3f mem 0xef9a0000-0xef9bffff,0xefa40000-0xefa7ffff irq 19 at device 2.1 on pci2
em1: Ethernet address: 6c:b3:11:50:c6:c7
em1: netmap queues/slots: TX 1/256, RX 1/256
em2: <Intel(R) PRO/1000 Legacy Network Connection 1.1.0> port 0xdc40-0xdc7f mem 0xef9c0000-0xef9dffff,0xefa80000-0xefabffff irq 19 at device 3.0 on pci2
em2: Ethernet address: 00:1b:21:32:da:42
em2: netmap queues/slots: TX 1/256, RX 1/256
em3: <Intel(R) PRO/1000 Legacy Network Connection 1.1.0> port 0xdc80-0xdcbf mem 0xef9e0000-0xef9fffff,0xefac0000-0xefafffff irq 16 at device 3.1 on pci2
em3: Ethernet address: 00:1b:21:32:da:43
em3: netmap queues/slots: TX 1/256, RX 1/256
                  

                  The onboard NIC is found :

fxp0: <Intel 82801GB (ICH7) 10/100 Ethernet> port 0xdcc0-0xdcff mem 0xef97f000-0xef97ffff irq 20 at device 8.0 on pci2
                  miibus0: <MII bus> on fxp0
                  inphy0: <i82562ET 10/100 media interface> PHY 1 on miibus0
                  inphy0:  10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto, auto-flow
                  fxp0: Ethernet address: 00:12:3f:b3:58:75
                  

You can see that there are 4 "em" NICs found, plus the "fxp" (the onboard Intel NIC).
Again, only fxp0, em0 and em1 are actually hooked up.

                  @1-21Giggawatts said in Packetloss on pfsense firewall:

I also noticed that one of my interfaces is using the same IRQ number as the <ACPI PCI-PCI bridge> - could that cause an issue like this if it also happened to be the IRQ for the LAN interface?

IRQs are auto-distributed these days. That's what ACPI is all about: allocating resources among the devices found during boot.
A quad NIC could well share the same IRQ - why not?

                  @1-21Giggawatts said in Packetloss on pfsense firewall:

This is all getting a bit too hard - does anyone know whether, if I export my configuration from 2.4.5, it can be imported again into 2.4.4?

Of course.
It's done all the time.


1-21Giggawatts

Thanks for the clarification on em drivers. Yeah, it looked like something was borked, as it only found 3. I bit the bullet and re-installed 2.4.5 fresh and restored the config tonight - the problem has disappeared - happy days! I will check my kernel logs and see if it finds all 4 NICs now.

1-21Giggawatts

Yup - that's better:

                      em0: <Intel(R) PRO/1000 Network Connection 7.6.1-k> port 0xe000-0xe01f mem 0xd0700000-0xd071ffff,0xd0720000-0xd0723fff irq 16 at device 0.0 on pci1
                      em0: Using an MSI interrupt
                      em0: Ethernet address: 00:e0:67:05:24:40
                      em0: netmap queues/slots: TX 1/1024, RX 1/1024
                      pcib2: <ACPI PCI-PCI bridge> irq 17 at device 28.1 on pci0
                      pcib2: [GIANT-LOCKED]
                      pci2: <ACPI PCI bus> on pcib2
                      em1: <Intel(R) PRO/1000 Network Connection 7.6.1-k> port 0xd000-0xd01f mem 0xd0600000-0xd061ffff,0xd0620000-0xd0623fff irq 17 at device 0.0 on pci2
                      em1: Using an MSI interrupt
                      em1: Ethernet address: 00:e0:67:05:24:41
                      em1: netmap queues/slots: TX 1/1024, RX 1/1024
                      pcib3: <ACPI PCI-PCI bridge> irq 18 at device 28.2 on pci0
                      pcib3: [GIANT-LOCKED]
                      pci3: <ACPI PCI bus> on pcib3
                      em2: <Intel(R) PRO/1000 Network Connection 7.6.1-k> port 0xc000-0xc01f mem 0xd0500000-0xd051ffff,0xd0520000-0xd0523fff irq 18 at device 0.0 on pci3
                      em2: Using an MSI interrupt
                      em2: Ethernet address: 00:e0:67:05:24:42
                      em2: netmap queues/slots: TX 1/1024, RX 1/1024
                      pcib4: <ACPI PCI-PCI bridge> irq 19 at device 28.3 on pci0
                      pcib4: [GIANT-LOCKED]
                      pci4: <ACPI PCI bus> on pcib4
                      em3: <Intel(R) PRO/1000 Network Connection 7.6.1-k> port 0xb000-0xb01f mem 0xd0400000-0xd041ffff,0xd0420000-0xd0423fff irq 19 at device 0.0 on pci4
                      em3: Using an MSI interrupt
                      em3: Ethernet address: 00:e0:67:05:24:43
                      em3: netmap queues/slots: TX 1/1024, RX 1/1024

Thank you for your informative help, Gertjan - it's appreciated!

1-21Giggawatts

                        And after a day - the issue is back... OK 2.4.4 it is then.

perlenbacher

                          Before flattening your install, update your system after selecting the latest Dev branch in the GUI.
                          It would only take 2 minutes.
                          2.5.0a may suit you better...
                          It is built on FreeBSD 12.1-STABLE
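To confirm which FreeBSD base a given image is actually running on (a sketch; from the shell or Diagnostics > Command Prompt):

    uname -rs    # e.g. FreeBSD 11.3-STABLE on 2.4.5, FreeBSD 12.1-STABLE on the 2.5.0 snapshots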

1-21Giggawatts

Good idea - I tried 2.5 last night but I'm still having the same lockup / packet loss issues.

I have found a website that has an archive of older versions; I'll roll back and find out if it's actually my hardware that's stuffed.

1-21Giggawatts

Got 2.4.3 running again now - let's see how it goes.

1-21Giggawatts

In order to install Snort I had to update to 2.4.4-p3 - hoping that isn't where the issues started ;-/

1-21Giggawatts

OK, I have tried just about everything with this. I have come to the conclusion that it is most likely a hardware error. I'm still getting packet loss to the device on the internal interface every few hours, for around 5 seconds at a time.

I have connected the firewall directly to my Cisco switch rather than using the conduit cables in the wall, to eliminate those, and changed all cables. I changed the switchport on the Cisco switch - no errors on the ports. I tested with all of the available interfaces in my device: em0, 1, 2 and 3. When the error occurs I don't drop packets to any other devices connected on the same VLAN on the Cisco switch - it's only the firewall. I am running a Yanling N10 Plus device with 4 NICs.

I thought perhaps it could be a BSD issue, so I installed HP's ClearOS 7.6.0 to compare, which runs on a Linux kernel - but the problem is still there. I have installed pfSense 2.4.3, 2.4.4 and 2.4.5, and I also tried OPNsense 20.1, which runs on a more recent version of BSD too; nothing has fixed this problem yet.

I guess the only other option is to change the internal IP, just in case something on my network is trying to use that IP occasionally - although I would expect to see a MAC flap alert in my switch log if that were the case.
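One way to rule an address conflict in or out without renumbering anything: FreeBSD logs an "arp: ... is using my IP address ..." warning whenever another host answers for the firewall's own address, so it should already show up in the system log if it is happening (a sketch; on 2.4.x the system log is a circular log read with clog):

    clog /var/log/system.log | grep -i 'using my IP'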

1-21Giggawatts

Last throw of the dice - I decided to try IPFire. I still really wanted something that incorporated inline IPS and that I could use my Snort VRT subscription with.

Downloaded v2.25 last night, installed it, and it's still going strong. Got through my morning MS Teams meeting with 0 packet loss. I've been running a ping test to the internal interface for around 8 hours so far and it hasn't skipped a beat. Fantastic!
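For anyone wanting to pin down exactly when outages like the earlier ~5 second drops happen, a minimal sketch of such a ping test, run from another machine on the LAN (192.168.1.1 is a placeholder for the firewall's internal address):

    #!/bin/sh
    # Log a timestamp for every missed reply so drops can be matched against other logs.
    TARGET=192.168.1.1
    while :; do
        ping -c 1 "$TARGET" > /dev/null 2>&1 || \
            echo "$(date '+%Y-%m-%d %H:%M:%S') no reply from $TARGET" >> ping-loss.log
        sleep 1
    done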

The firewall is not as intuitive or as fully featured as pfSense - the GUI is fairly archaic looking - however it seems quick and, most importantly for me, stable with my hardware!

A pity that pfSense stopped working for me - perhaps I will try the next major release, but until then I will just stick with IPFire.

1-21Giggawatts @jimp

@jimp Looks like the issue may have been a BSD driver for my hardware - I'm assuming the <Intel(R) PRO/1000 Network Connection 7.6.1-k>?

fezster

Identical issue here! (And quite a few of us, it seems.)

See my thread here: https://forum.kitz.co.uk/index.php/topic,24600.60.html

I've been running OPNsense 20.1 (FreeBSD 11.2) for almost a week without issue. No packet loss, no high pings, etc.

I found this thread when searching for whether pfSense 2.4.4-p3 (also based on FreeBSD 11.2) would resolve the issue - did you ever try this?

1-21Giggawatts

Yep, I ran versions 2.4.4-p3 and 2.4.3 - same problems - and also tried OPNsense 20.1, same issue. I've been running IPFire 2.25 for over a week now with zero issues (other than Suricata not parsing the Snort VRT ruleset very well).

Perhaps when they release a new version of pfSense I will take a look, but I just want a stable firewall with inline IPS capabilities - and IPFire is doing that for me now.

fezster

Resolved by putting Unbound into DNS forwarding mode instead of resolver mode.
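For anyone hitting the same thing: that setting is the "Enable Forwarding Mode" checkbox under Services > DNS Resolver. Whether it took effect can be checked in the generated Unbound config (a sketch; the path is the pfSense 2.4.x default) - forwarding mode adds a forward-zone for "." pointing at the configured upstream DNS servers instead of recursing from the root servers:

    grep -A5 'forward-zone' /var/unbound/unbound.conf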
