Increased Memory and CPU Spikes (causing latency/outage) with 2.4.5
-
@jdeloach I don't recall the exact thread or procedure, but you can get Snort/Suricata to work on 2.4.4-p3 by changing the repository to point to 2.4.4 and installing the "older" versions. Search around a bit if you think you'll have another go at downgrading.
I'm back on 2.4.5 also; it's fine as long as you just don't touch anything that triggers a filter reload... Life goes on, such as it is...
-
How to change to the 2.4.4 repository is described here. According to Stephenw10 the old versions are still online, but I haven't tried it on a fresh install yet.
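Roughly, it boils down to pointing pkg at the 2.4.4 repository before installing the packages. A sketch of the idea only; the file name and the repository URL line are placeholders, so take the real values from the linked post before changing anything:

# Hypothetical sketch: make pkg use the 2.4.4 repository instead of 2.4.5.
# The conf file name and its "url:" value are placeholders from my notes; verify them first.
vi /usr/local/etc/pkg/repos/pfSense.conf   # change the "url:" entry to the 2.4.4 repository
pkg update -f                              # force a refresh of the repository metadata
pkg install pfSense-pkg-snort              # then install the package version that repo carries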
-
@Artes said in Increased Memory and CPU Spikes (causing latency/outage) with 2.4.5:
How to change to the 2.4.4 repository is described here. According to Stephenw10 the old versions are still online, but I haven't tried it on a fresh install yet.
I did what stephenw10 posted, I thought, but obviously I did something wrong and didn't see the older version of Snort. It was my first time trying to roll back. It's no big deal, as I got the temperature issue with the CPU worked out so it's not running so hot. If this comes up again I'll give it another try. Anyway, at the moment I'm happy with the way it's working with the new version of pfSense.
-
@getcom Can you post your ticket number here so we can follow it on Redmine?
I had a quick look and couldn't see it...
Thanks!!
-
Opening a support ticket and filing a bug report on Redmine are two different things. Given the situation and Netgate's priority of supporting their paying customer base, I wouldn't hold your breath for a resolution and patch anytime soon. As Netgate said in the release announcement, upgrading was not recommended for some customers at this time.
https://www.netgate.com/blog/pfsense-2-4-5-release-now-available.html
-
@tman222 said in Increased Memory and CPU Spikes (causing latency/outage) with 2.4.5:
net.pf.request_maxcount
changing "Firewall Maximum Table Entries" in the gui, is just directly changing the "net.pf.request_maxcount" under syctl output.
You either have to reboot or, I think, Status > Filter Reload will update it. But maybe not!
-
I also experienced this problem with high CPU usage on 2.4.5 in Hyper-V. The only way to fix it was to limit the virtual CPUs to one. I'm now running my own newly built pfSense box, so the problem is moot for me.
-
My internet connection was nearly unusable for several days... It was really hard (or let's say: impossible) to have voice and video calls in my home office. And then I spent hours searching for the root cause after the workday. Since I had some other problems with my network and had changed parts of the equipment (10 Gbit switch and network cards), I didn't suspect pfSense as the root cause.
For me it helped to stop the dpinger service, but the workaround in Post #901871 works much better, since it doesn't sacrifice the advantages of my multi-WAN setup (load balancing and failover). I opened an issue on Redmine: #10414
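If you want to test the dpinger angle yourself, here is a quick sketch from the shell. Note that pfSense may restart dpinger on its own after a gateway or config event, so treat this only as a temporary test:

# Check whether dpinger instances are running and how much CPU they use
ps aux | grep '[d]pinger'
# Stop them temporarily (pfSense can restart them later on its own)
killall dpinger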
At the moment they can't reproduce it, but judging by this thread, several installations are affected. Maybe you can also help to find steps to reliably reproduce the issue so they can do further tests?
-
I'm also affected.
HW: SG-4860
If the pfctl process peaks at 100%, ping latency is also very high.
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=1125ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=1613ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=1190ms TTL=55
Reply from 9.9.9.9: bytes=32 time=5ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
Reply from 9.9.9.9: bytes=32 time=2ms TTL=55
-
Very similar CPU performance issues to another topic about running pfSense as a Hyper-V VM, which I have contributed to. As others have said, it is definitely not a pfBlocker issue.
Hyper-V performance
https://forum.netgate.com/topic/149595/2-4-5-a-20200110-1421-and-earlier-high-cpu-usage-from-pfctl/10
Physical server performance
https://forum.netgate.com/topic/151819/2-4-5-high-latency-and-packet-loss-not-in-a-vm/2
Some users have reported that assigning only 1 CPU to the VM resolves the problem, but this would suggest there is a multi-core issue with the build at this time?
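For what it's worth, after changing the vCPU assignment in Hyper-V you can confirm from the pfSense shell what the guest actually sees:

# Number of CPUs the pfSense guest detects
sysctl hw.ncpu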
Has anyone had any feedback from Netgate as yet?
-
I wrote a few days ago that I was OK with pfBlocker and pfSense 2.4.5, but today I rebooted the pfSense VM and the problem returned. BUT, I have found a temporary, CRAZY solution: just manually update the pfBlocker feeds and the problem is gone (CPU and RAM usage is much higher than on 2.4.4, but it doesn't freeze 100% every few seconds). 2.4.5 is a problematic version, and it's more complicated with pfBlocker enabled on a fresh boot...
-
@Gektor said in Increased Memory and CPU Spikes (causing latency/outage) with 2.4.5:
I wrote a few days ago that I was OK with pfBlocker and pfSense 2.4.5, but today I rebooted the pfSense VM and the problem returned. BUT, I have found a temporary, CRAZY solution: just manually update the pfBlocker feeds and the problem is gone (CPU and RAM usage is much higher than on 2.4.4, but it doesn't freeze 100% every few seconds). 2.4.5 is a problematic version, and it's more complicated with pfBlocker enabled on a fresh boot...
I do that after every update to be sure that everything is working. In my case it did not change anything.
My previous post related to "Firewall Maximum Table Entries": downsizing the value below 65535 does not change the behavior when something has to be changed.
It is not only this pfctl patch that is causing the issues. We have checked more patches.
-
@cuco I have updated #10414 offering to help them reproduce the issue, which I can reproduce at will.
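For anyone else who wants to try for the ticket, what triggers it for me is simply forcing a filter reload and watching pfctl while pinging through the box. A rough sketch; the rc script path is what my 2.4.5 install has, so double-check yours:

# Kick off a filter reload in the background (script path as found on my box; verify on yours)
/etc/rc.filter_configure &
# Watch per-thread CPU usage, sorted by CPU; pfctl pegs a core while the tables reload
top -aSH -o cpu
# Meanwhile, ping through the firewall from a LAN host and watch for the latency spikes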
-
Same issue here on a bare metal cluster. Upgraded to 2.4.5 and got CPU spikes and unstable HA. After each change on the primary node, the secondary node becomes master for a second or two and then becomes a slave again. Why? Because the primary node doesn't send a heartbeat ping for 5 seconds while the filter reloads.
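You can watch the gap yourself by capturing the CARP advertisements on the sync or LAN interface while a filter reload runs (the interface name below is just an example; use whichever interface carries your CARP traffic):

# CARP advertisements are IP protocol 112; during the reload the primary goes
# silent for several seconds and the secondary promotes itself.
tcpdump -ni igb0 ip proto 112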
I removed the values of "Firewall Maximum States" and "Firewall Maximum Table Entries" so that they fall back to the defaults (for me, states = 3236000 and table entries = 200000). After a filter reload everything went back to normal (and after a reboot of both machines all is still good).
I now have pfBlocker removed and "Block bogon networks" unchecked (if I check it, I have to increase table entries to 400000). When I increase to 500000 and enable bogon networks on the secondary node, the problem returns. When I uncheck bogon networks and remove the "table entries" value again, the problem is solved.
So for now, what works for me as a temporary solution: don't increase table entries but use the default values, don't enable bogon networks, and don't enable pfBlocker.
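If you want to sanity-check how close you are to the "Firewall Maximum Table Entries" limit, the loaded pf tables can be inspected directly; the table name "bogons" is just the one relevant here, yours may differ:

# List all pf tables currently loaded
pfctl -sT
# Count the entries in the bogons table and compare against the configured maximum
pfctl -t bogons -T show | wc -l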
A view of the CPU spikes on the master node. Around 9am I reset the table entries value to the default.
How that node looked over the last few days (updated on April 6 around 10am):
-
Any news on this?
The whole thing is a serious disaster and not a minor problem, in my opinion.
-
I think we will not get any news on this issue until the final 2.5.0 version (which will be released no earlier than a year from now, or maybe two).
-
@Woodsomeister said in Increased Memory and CPU Spikes (causing latency/outage) with 2.4.5:
Any news on this?
The whole thing is a serious disaster and not a minor problem, in my opinion.
Unfortunately the current "solution" is to stay with the 2.4.4 version until we see a stable release.
-
@Woodsomeister I agree, it's not a minor problem. It looks like at least all major virtual environments have this problem, and even physical machines with a larger network and/or IP table setup.
-
It's not being ignored.
2-4-5-high-latency-and-packet-loss-not-in-a-vm
If I were a betting man, I would bet it's a regression in pf in 11.3-STABLE. I don't see that getting priority from the FreeBSD project; 12.x is the priority. Hope I am wrong.
-
I can verify this is happening on my hardware-based system (an XG-1541 CARP setup purchased from Netgate).
I upgraded my standby to 2.4.5 and immediately saw the CPU spiking and bad ping results, so I did not update my primary. EDIT: as soon as I submitted this I had packet loss and 3-4 second pings from the standby, so I'm happy I didn't update the primary.
After updating the pfBlocker-ng package, the ping/CPU on the standby router returned to normal, but I'm a bit wary of proceeding with upgrading the primary.
Looking at the Redmine case and the referenced FreeBSD errata, I'm not sure if the errata is being referenced as a cause or a solution?
If it is a potential solution, will an updated pfctl be made available?
Many thanks
EDIT: Also, I noticed the CPU spikes/packet timeouts were causing the secondary firewall to promote itself, so I had to disable CARP on it until this is sorted.