pfSense 2.3 LAN interface stops routing traffic - stops working after 2 or 3 days
-
The CPU interrupt spike is what happened when mine had the issue too. It stayed high even after I failed back over to the 2.1.5 primary cluster member, with very little traffic going to the secondary 2.3 member. I had to reboot to get the high interrupt load back to near zero. Interestingly, mine also happened right at 5am when it occurred several days ago :).
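For anyone wanting to watch for the same thing, the interrupt load is easy to check from a shell with the standard FreeBSD tools, e.g.:
vmstat -i     (per-source interrupt counts and rates)
top -SH       (shows the kernel intr threads and how much CPU they are using)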
I haven't had another incident since 2 days ago. Maybe it is just a matter of time though. These are the changes I made that were different from when it stopped passing some percentage of traffic on the WAN last time:
- Removed hw.igb.num_queues=2 from /boot/loader.conf.local (it now defaults to 0, which I think means self-tuning)
- Removed hw.igb.rx_process_limit=1000 from /boot/loader.conf.local (it now defaults to 100); both removed lines are shown below for reference
- Changed my IP Aliases on the secondary from being assigned to the WAN interface to being assigned to the CARP VIP. The primary still had its IP Aliases on the CARP IP, as it was not upgraded. This meant my IP Aliases were up on both members until I switched over to the secondary. It is a backup site, so I didn't notice; production traffic does not go there, only transaction logs and NFS copying between sites over IPsec. I don't think this is related, because CARP was disabled on the primary when the traffic stopped flowing 10 hours later, so while the IPs were up on both members for a while, no IP Aliases were up on the primary pfSense 2.1.5 server once I had switched to the secondary.
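For reference, these are the two lines I removed from /boot/loader.conf.local; with them gone the igb driver just falls back to its defaults:
hw.igb.num_queues=2
hw.igb.rx_process_limit=1000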
So far so good but I have only had one time where some percentage of traffic stopped being received from the WAN interface and it was 10 hours after switching to the secondary pfsense 2.3 firewall.
My scenario might just be related to the num_queues thing that I had leftover from the previous 2.1.5 version.
None of those things apply to me in my case so the underlying issue must be something else.
-
Updated to 2.3, running in a Parallels VM on OS X. I had the same issue: the LAN would stop responding while WAN/VPN stayed responsive. I had pfBlockerNG running hourly updates, so I changed that to daily. I also removed DHCP registration and Static DHCP registration from the DNS Resolver. I don't know which one fixed it, but I have not had a hang requiring a reboot now for 5 days.
The system seemed to hang just after the top of the hour 3 or 4 times per day (hence the cron change), and there were also DHCP logs stating that a static IP address had changed its MAC address from its MAC address to the same MAC address (hence the DHCP change in the Resolver).
Whichever it was, no probs now.
Hope that helps
-
I removed pfblocker completely and it still froze so I don't think that's the culprit.
I do not have DHCP running on pfsense.
I wish those were my issues but sadly they are not.
-
I don't use dhcp or pfblocker. Firewall still running well for me going on 3 days.
-
I set pfBlocker to only update once a day now. We shall see.
-
I'm not running pfblocker or any services outside of openvpn/ipsec, other than that it's a clean install on this hardware: https://www.supermicro.com/products/motherboard/Atom/X10/A1SRi-2558F.cfm
The problem still persists even with the custom kernel so I'd be surprised if any of these services in particular are the cause.
-
If the custom kernel is the one provided by cmb, that one disabled the netmap stuff. What is a bit interesting is IPsec; I think a lot, or all, of the folks reporting this problem/symptom have IPsec and em/igb interfaces involved.
Would it be possible to disable the IPsec VPNs temporarily? That would be an interesting data point. If the problem goes away, that narrows down the search for the root cause. If it doesn't, then it's not a factor. Just to make it clear, I'm not part of or associated with pfSense, just a user who likes puzzles.
-
@mer:
If the custom kernel is the one provided by cmb, that one disabled the netmap stuff. What is a bit interesting is IPsec; I think a lot, or all, of the folks reporting this problem/symptom have IPsec and em/igb interfaces involved.
Would it be possible to disable the IPsec VPNs temporarily? That would be an interesting data point. If the problem goes away, that narrows down the search for the root cause. If it doesn't, then it's not a factor. Just to make it clear, I'm not part of or associated with pfSense, just a user who likes puzzles.
I am actually thinking you are correct. Unfortunately I can't disable IPsec because it is essential for our sites to function properly. I am however using em/igb at these three test sites so that might explain a few things. Perhaps a bad network driver is causing the issue?
-
We've confirmed it's not specific to any particular NIC. Happens on em, igb, and re at a minimum, and probably anything.
It seems to be related to UDP traffic streams across IPsec. dd /dev/urandom to UDP netcat with a bouncer back on the other side, and it's replicable within a few minutes to a few hours. Faster CPUs seem to be less likely to hit the issue quickly.
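Roughly, a test along those lines could look like this (10.0.1.10 / 10.0.2.10 and the ports are just placeholders for LAN hosts on either side of the tunnel, and depending on your netcat flavor you may need -p for the listen port):
On the remote LAN host (the "bouncer"), forward whatever arrives back across the tunnel:
nc -u -l 5001 | nc -u 10.0.1.10 5002
On the local LAN host, sink the returned stream and start generating traffic:
nc -u -l 5002 > /dev/null &
dd if=/dev/urandom bs=1400 | nc -u 10.0.2.10 5001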
It seems like it might be specific to SMP (>1 CPU core). I haven't been able to trigger it on an ALIX even beating it up much harder, relative to CPU speed, than faster systems where it is replicable.
If you're running on a VM and seeing this issue with >1 vCPU, try changing your VM to 1 vCPU.
If you're on a physical system, you can force the OS to disable additional cores. Take care when doing this, try it on a test system first if you're not comfortable with doing things along these lines.
dmesg | grep cpu
to find the APIC IDs. You'll see something like:
cpu0 (BSP): APIC ID: 0
cpu1 (AP): APIC ID: 2
In /boot/loader.conf.local (create file if it doesn't exist), add:
hint.lapic.2.disabled=1
where 2 is the APIC ID of the cpu1 CPU core to disable. Replace accordingly if yours isn't 2. Add more lines like that for each additional CPU to disable so you only have cpu0 left enabled. Reboot.
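For example, on a hypothetical quad-core box where dmesg showed APIC IDs 0, 2, 4, and 6, /boot/loader.conf.local would end up containing:
hint.lapic.2.disabled=1
hint.lapic.4.disabled=1
hint.lapic.6.disabled=1
leaving only cpu0 (APIC ID 0) enabled after the reboot.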
Then report back whether it continues to happen. That seems to suffice as a temporary workaround.
-
I seem to be running into a very similar problem. 2.3 had been running fine for days, and yesterday evening everything was dead. Or so I believed. I restarted and everything was back. This morning: Internet dead again. Restarted, but still no change. Restarted again, everything OK. I then upgraded to 2.3.1, restarted, everything dead. So I attached the serial console, only to find that the system was up, WAN connected, responsive, LAN IP attached… just no traffic on the LAN.
I simply did ifconfig igb2 down and up and all was running.
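Spelled out, that was just (igb2 is my LAN interface here):
ifconfig igb2 down
ifconfig igb2 up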
Yes, I have IPsec (and I unfortunately need it). It is on an APU with 4 cores. What is puzzling me is why this happened right after a reboot.
-
I do NFS copies over IPsec between sites, so we definitely send some UDP traffic through the firewall. It is not a lot of traffic though, so maybe that is why it has only happened once so far on my system. Almost 4 days now. Initially it happened after 10 hours.
-
@cmb:
We've confirmed it's not specific to any particular NIC. Happens on em, igb, and re at a minimum, and probably anything.
It seems to be related to UDP traffic streams across IPsec. dd /dev/urandom to UDP netcat with a bouncer back on the other side, and it's replicable within a few minutes to a few hours. Faster CPUs seem to be less likely to hit the issue quickly.
It seems like it might be specific to SMP (>1 CPU core). I haven't been able to trigger it on an ALIX even beating it up much harder, relative to CPU speed, than faster systems where it is replicable.
If you're running on a VM and seeing this issue with >1 vCPU, try changing your VM to 1 vCPU.
If you're on a physical system, you can force the OS to disable additional cores. Take care when doing this, try it on a test system first if you're not comfortable with doing things along these lines.
dmesg | grep cpu
to find the APIC IDs. You'll see something like:
cpu0 (BSP): APIC ID: 0
cpu1 (AP): APIC ID: 2
In /boot/loader.conf.local (create file if it doesn't exist), add:
hint.lapic.2.disabled=1
where 2 is the APIC ID of the cpu1 CPU core to disable. Replace accordingly if yours isn't 2. Add more lines like that for each additional CPU to disable so you only have cpu0 left enabled. Reboot.
Then report back if it happens again. That might suffice as a temporary workaround, and will give us additional data points in finding the specific root cause.
AWESOME! Thank you! I will do this and report back. One question: if the system is a dual core with hyper-threading, do the hyper-threads show up as cores as well, and do they also need to be disabled?
-
Has anyone tried disabling hyper-threading in the BIOS?
Just for testing.
-
I always disable hyperthreading on firewalls, so it was already disabled on my systems when I had the crash/lost packets.
-
We've confirmed that the problem no longer occurs after disabling all but one CPU core. So that looks to be a viable immediate workaround for most users. See instructions in my post here.
https://forum.pfsense.org/index.php?topic=110710.msg618388#msg618388
I doubt Hyperthreading is relevant either way. It happens on any SMP system, including ones without HT. Any HT cores will also need to be disabled for the workaround, not because they're HT, but simply because they're additional cores.
-
I am having the same issue: it randomly stops routing traffic to all VLANs. If I reboot, it will be fine for a day or so, then it does it again.
-
Add me to the list as well. I've got this happening on both a SUPERMICRO SYS-5018A-FTN4 1U rackmount server (C2758, 8-core) and an SG-2440 pfSense appliance (C2358, 2-core). Both have 4 Intel igb interfaces, and both have IPsec tunnels (required to reach the colo where our VOIP phone system is). What's interesting is that it's NOT happening on my home system, an AMD Athlon box (a Dell I got for $250 from Best Buy 4+ years ago) with dual Intel em interfaces. I have the same IPsec tunnels on it, so there are 4 locations in total - 2 offices, my home, and a colo, all running pfSense (the colo is still on 2.2.6) - and they're all connected to each other, so every location has 3 IPsec tunnels. This didn't start occurring until 2.3. I actually thought maybe this had something to do with AES-NI, since the only systems I have AES-NI on are the ones affected.
I'm going to try the single-core trick to see if that helps for now, though I'm concerned about speed, as the 8-core C2758 is at a location with gigabit service because the C2358 maxed out around 600 Mbit. NOTE: it turns out it's not the number of cores that lets the C2758 handle gigabit, but rather the faster clock speed. Even with 1 core it can still handle the full gigabit, so that's good.
-
@cmb:
We've confirmed that the problem no longer occurs after disabling all but one CPU core. So that looks to be a viable immediate workaround for most users. See instructions in my post here.
https://forum.pfsense.org/index.php?topic=110710.msg618388#msg618388
I doubt Hyperthreading is relevant either way. It happens on any SMP system, including ones without HT. Any HT cores will also need to be disabled for the workaround, not because they're HT, but simply because they're additional cores.
Disabled all but one core. I will let you know if I continue to have issues. Please let me know when there is a more permanent fix.
-
Hello, add me to the list as well. I recently upgraded hardware from an ALIX to a SuperMicro SBE200-9B with 4 igb NICs, and I am getting the watchdog timeout error on the new hardware (not on the ALIX) as the LAN interface (igb1) will drop randomly every few days:
https://dl.dropboxusercontent.com/u/42296/SuperMicroPfsense.JPG
Doing some research prior to finding this thread I found:
https://doc.pfsense.org/index.php/Disable_ACPI
Having read this, I will try disabling the other cores for now. Hoping there is a solution soon.
I've loved pfSense for many years, can't say that enough!
-
I'm getting this as well. Most of my pfSense installs are virtual machines running on VMware ESXi.
I use pfSense for building site-to-site IPSEC tunnels (Blowfish encryption).
In my case it's happening when I see heavy loads across the IPSEC tunnel (this is normally at night, for running backups).
It appears traffic stops completely on the LAN interface. If I look at the console, I see "em1: Watchdog timeout -- resetting" or something to that effect (em1 is my LAN interface).
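For anyone comparing notes, those resets show up in the system log; on pfSense 2.3 the circular log can be searched with something like:
clog /var/log/system.log | grep -i "watchdog timeout"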
For encryption, I use Blowfish 256 bit with a SHA512 Hash Algorithm. DH Group Phase 1 - 8192 bit.
For phase 2, I use ESP with Blowfish 256 bit with a SHA512 Hash Algorithm. PFS key group 18 - 8192 bit.
After reducing the DH key group and PFS key group to 14 (2048 bit), I have noted an increase in stability (it hasn't locked up in about a week). I've just applied this workaround on a few other machines I manage; I will report back on this.