New 502 Bad Gateway
-
Hi, All
Is it just the pfBlockerNG DNSBL issue? Can I turn on the IPv4 blocks only? I have upgraded to 2.4.1 (ufs) already, pfBlockerNG is disabled due to the issue, but I really want to turn it on. Thanks.
-
Really dumb question I am sure, but if the web GUI is giving a Bad Gateway, is a clean install the only way to correct this? I am not sure if SSH access was enabled, but SSH to 192.168.1.1 and .254 times out.
If I restart the router via the power cord, I still can't get past the Bad Gateway even for a second.
Am I doomed?
-
I've already posted it once, but a ZFS install cured all my issues. Even on 2.4.0 and pbng 2.1.2
I did a clean install with ZFS and am still getting this issue.
-
For me, after 2.4.1-RELEASE (amd64) and pfBlockerNG 2.1.2_1, finally no more 502 or 504 errors. OpenVPN keeps its connections.
System has been running for 2 days, 09 hours, 02 minutes, 23 seconds. Before, I had issues after 6-9 hrs.
Many thanks to all.
Just to add that my system has now been running without issues for 6 days, 23 hours, 15 minutes. If anyone needs any outputs I will provide them; just tell me what to do to create the logs.
Cheers!
-
I disabled pfBlockerNG but I'm also getting this 502 Bad Gateway message.
When I log in via SSH it just shows:
pfSense -
And you can't do anything. However, if I hit Ctrl-Z it drops to a working shell, allowing me to reboot so I can get back in via the GUI.
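For anyone hitting the same hang, a rough sketch of that recovery; it assumes SSH still accepts the login and only the console menu is stuck (the reboot commands below are the standard ones, nothing special to this fix):
```sh
# At the hung "pfSense -" prompt, press Ctrl-Z to suspend the stuck menu and drop to a shell.
# From there, reboot cleanly. pfSense normally ships an rc.reboot script; if it is not
# present on your build, the plain FreeBSD reboot command works as well.
/etc/rc.reboot
# or, as a fallback:
reboot
```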
Hope this helps others who don't have easy access to their routers.
Steve
-
Hello,
still getting this issue after a few hours of uptime.
2.4.1-RELEASE (amd64) running on ufs
pfBlockerNG - 2.1.2_1
snort - 3.2.9.5_2 (newer one available)
Just disabling the DNSBL keeps everything working, but having to deactivate it defeats the purpose.
pfBlockerNG runs fine with DNSBL disabled.
-
This has happened twice today so this is still an issue.
pfSense Netgate SG-4860
2.4.1-RELEASE (amd64)
pfBlockerNG 2.1.2_1
I have attached the recommended Output File but was wondering if there is anything else that needs to be supplied to help?
[pfSense Output File_11-2-17.txt](/public/imported_attachments/1/pfSense Output File_11-2-17.txt)
-
Just to add to this thread, I can confirm that the above fix worked for me. I had this issue after pushing out the upgrade to 2.4 and followed the post above (I commented the lines out rather than deleting them). Since then it has been stable and all pfSense routers in my environment have stopped giving the bad gateway error.
After I commented out that block of code I've been stable, although I know it's just a band-aid for now. On one of my 8 devices I've been pushing out the pfBlockerNG updates and am still getting the 502 Bad Gateway nginx error. In turn, with all packages up to date, I've simply commented out the updated block of code again and it seems to be stable. I know this is not the fix, but at least I'm not having to reboot the gateway router 1-2x a day.
Here is what I commented out:
File: /usr/local/www/pfblockerng/www/index.php
```php
// Increment DNSBL Alias counter
/*if (!empty($pfb_query)) {
*	$pfb_found = FALSE;
*
*	$dnsbl_info = '/var/db/pfblockerng/dnsbl_info';
*	if (($handle = @fopen("{$dnsbl_info}", 'r')) !== FALSE) {
*		$lock_handle = @try_lock($handle, 5);
*		if ($lock_handle) {
*			if (($pfb_output = @fopen("{$dnsbl_info}.bk", 'w')) !== FALSE) {
*				$lock_pfb_output = @try_lock($pfb_output, 5);
*				if ($lock_pfb_output) {
*					$pfb_found = TRUE;
*
*					// Find line with corresponding DNSBL Aliasname
*					while (($line = @fgetcsv($handle)) !== FALSE) {
*						if ($line[0] == $pfb_query) {
*							$line[3] += 1;
*						}
*						@fputcsv($pfb_output, $line);
*					}
*					@unlock($lock_pfb_output);
*				}
*				@unlock_force($pfb_output);
*				@fclose($pfb_output);
*			}
*			@unlock($lock_handle);
*		}
*		@unlock_force($handle);
*		@fclose($handle);
*	}
*
*	if ($pfb_found) {
*		@rename("{$dnsbl_info}.bk", "{$dnsbl_info}");
*	}
*}
*/
```
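If you apply the same edit by hand, it may be worth backing the file up first and lint-checking it afterwards; a minimal sketch, assuming the usual pfSense PHP binary location:
```sh
# Keep a copy of the original in case the edit needs to be reverted
cp /usr/local/www/pfblockerng/www/index.php /root/index.php.orig
# ... comment out the block shown above, then confirm the file still parses
/usr/local/bin/php -l /usr/local/www/pfblockerng/www/index.php
```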
I'll check back Monday to see if there are any updates! Have a nice weekend everyone!
-
I made some additional mods to the code. Run the following command to download the patched version from my Github Gist:
```sh
fetch -o /usr/local/pkg/pfblockerng/pfblockerng.inc "https://gist.githubusercontent.com/BBcan177/7ff15715be0f02afdbe0a00c676aedce/raw"
```
Recommend a reboot after downloading the patch.
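Before that reboot, a quick sanity check that the patched file actually landed doesn't hurt (just a sketch; the path comes from the fetch command above):
```sh
# Confirm the fetched file is non-empty and freshly written before rebooting
ls -l /usr/local/pkg/pfblockerng/pfblockerng.inc
# Then reboot as recommended
reboot
```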
Please let me know your feedback!
I installed this today, and after 6 hours of running my pfSense VM's disk usage increased by over 20 GB; the VM crashed and had to be rebuilt.
It has been working on my machines for 4 days without a hassle and without filling up the disks.
I did this too. Everything is on the latest release and pfBlockerNG seems to be working fine. OK, it has only been an hour, but so far it has been an hour without issues and it looks fairly stable. I'll give you feedback if anything changes.
-
This is still happening to me on 2.4.1 and the latest pfBlocker. It took 8 days from reboot for the 502s to start and all SSH connections to fail, and approximately 1 more day after that for all traffic to be dropped. I needed to get it back ASAP, so I don't have logs.
-
I can confirm too. Exactly the same happens here :-(
-
Ok… I don't know if this is luck and I'll be jinxing it with this post, but after battling this for weeks (on both UFS and ZFS) I decided to alter my cron jobs so that all recurring tasks are guaranteed a minimum interval of 5 minutes between runs. Since doing that, I've gone over 7 days without a hitch for the first time in over a month.
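For illustration only, here is the idea in crontab notation; on pfSense these entries are actually managed through the Cron package / config, and the script path below is a placeholder, not the real pfBlockerNG command:
```sh
# Before: a recurring task firing every minute
#*/1 * * * *  root  /path/to/recurring_task.sh
# After: the same task spaced out to a minimum of 5 minutes between runs
*/5  * * * *  root  /path/to/recurring_task.sh
```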
-
This is more of an info post to help try and sort out the issue.
I also had the Bad Gateway error after the 2.4.0 and 2.4.1 updates. pfBlockerNG is installed and running the GeoIP and DNSBL parts only, with some periodic updates (essentially Pi-hole). The pfSense system runs in a VM on XenServer (7.1, I believe).
What I found interesting was that I'm monitoring the firewall with Observium and the graphs are attached. (All of the same unit, same timeline, I just had to take 2 screenshots as the page is long.) Noting the graphs are 1 day / 7 days / 4 weeks / 1 year.
You can clearly see the 'spike' to crash/reboot time on the graphs, in both the running processes and the memory usage (etc)… the first spike is after the 2.4.0 install, with the 2.4.1 install coming immediately after the 'crash' of the 2.4.0 install. Then over a week running fine on 2.4.1... then processes ramp up again to crash point.
I could get to the console on the 2.4.1 box today but selecting 'reboot' from the console menu basically just hung the box... after 15mins it needed a 'force reboot' power cycle.
I'll be keeping a close eye on the firewall's health.. as well as this forum thread.
Happy to try and help debug this issue. It seems to me that something is 'triggering' the process madness and that doesn't seem to be a change (in my case) as the system ran for over a week without any involvement from me.
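For anyone without Observium, a rough way to capture the same signals from the pfSense shell while waiting for the ramp-up (standard FreeBSD tools assumed; the log path is arbitrary):
```sh
# Snapshot process count, memory counters and the most common process names;
# run periodically (e.g. from cron) and compare snapshots as the box degrades.
{
  date
  echo "procs: $(ps -ax | wc -l)"
  vmstat -s | head -5
  ps -axo comm | sort | uniq -c | sort -rn | head
} >> /root/proc_watch.log
```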
![Screen Shot 2017-11-07 at 8.31.55 pm.png](/public/imported_attachments/1/Screen Shot 2017-11-07 at 8.31.55 pm.png)
![Screen Shot 2017-11-07 at 8.32.15 pm.png](/public/imported_attachments/1/Screen Shot 2017-11-07 at 8.32.15 pm.png)
-
Happening here too.
After upgrading to 2.4.1 I cannot access the admin interface locally or via SSH; the text "pfSense - Serial: 0123456789 - " is presented, no commands are interpreted, and the menu options are not displayed either.
I cannot access it via HTTP either; the message "502 Bad Gateway" is displayed (I know this has already been mentioned in other posts).
With Zabbix I can list other details and, excluding the console and web interfaces, everything seems to be running fine.
The packages pfBlocker (with DNSBL) and Snort are installed and running. The box is a Supermicro 5015mt with 8 GB RAM and two 80 GB drives (GEOM mirror).
-
Hi,
confirming SimonSAUs observation about the number of processes.
Nagios NRPE reports 59-84 processes on average for my pfSense box, but it hits 250-310 procs when the error occurs.
-
Well after 2 weeks without issues, it just bit me again.
Guess it's time to run pihole instead of pfblocker until this gets resolved.
-
Exactly the same is happening on 2.4.2 as well :( I'm running an SG-8660 with 2.4.2, pfBlocker, Snort and Squid. It runs for about a week before the "502 Bad Gateway" appears and SSH stops working (I can log in, but it freezes after the serial number).
-
Yes, that was my feeling after seeing pfSense 'try' to reboot after it got to that state.
The reboot takes several times longer than usual, and you can see it trying to sync vnodes and simply timing out! This points to a big and important lock somewhere, or simply to reaching the maximum number of processes or running out of memory.
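If the box is still reachable when it gets into that state, a couple of stock FreeBSD commands can hint at what everything is blocked on; a sketch (the PID is a placeholder):
```sh
# Processes stuck in uninterruptible (disk/vnode) wait, plus what they are waiting on
ps -axo pid,state,wchan,comm | awk '$2 ~ /^D/'
# Kernel stack of one suspect process, e.g. PID 1234
procstat -kk 1234
# Kernel memory usage by type, largest consumers first
vmstat -m | sort -rn -k2 | head
```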
I am very surprised that this was not caught in testing: many, many people run pfBlockerNG, Suricata/Snort and Squid. That should be a basic configuration to test.
Yes, it takes traffic and some time to manifest, but any decent QA department needs to have, beyond load-generating tools, monitoring tools to watch for memory leaks and process status (I did SW QA a few years ago). I imagine the pfSense team does have all that, but the fact is that after a few spotless releases, we are back to insufficient testing of some standard, widely used configurations.
I have customers to support, and when they pay you for their network to be up and for everything to work as promised, the time spent chasing this stuff, both the releases and the forums, is time I could use for many better things. Instead of getting paid to test, fix, reboot or babysit the firewall, I would prefer for them to pay for a solution that somebody else has already babysat and tested properly.
For not a lot of money ($200 to $800 a year), you can buy a different solution that gives you almost all the features of pfSense (and some much better ones, like reporting, managed IPS, antivirus, and ads/malware blocking).
Untangle, which I have used longer (since 2010) than pfSense (since 2012), has never, ever given me these problems, actually no issues at all, and their support, while I was a non-paying customer, was very good and really helped me when I had a VLAN question. Of course I will continue using pfSense, but probably not for a big enough customer that needs a bullet-proof, 24/7/365, no-excuses solution.
My peace (and reputation) is worth more than the few hundred dollars I can make by babysitting a router…
-
Happened again late last night. This time got the logs requested
https://pastebin.com/GMZG8B6H
-
What strikes me as odd here (and maybe unrelated to pfBlocker) is the 182 running 'vnstat' processes. A possible source would be the Traffic Totals package; can you confirm you have that installed?
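If it helps confirm that, here is a quick way to count the vnstat processes and check whether a related package is present (a sketch; exact package names may differ):
```sh
# Count running vnstat processes
pgrep -c vnstat
# Look for the related package (name matching is deliberately loose)
pkg info | grep -i -e vnstat -e traffic
```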