An earnest appeal - please do fix APINGER in 2.2
-
Hi there,
There have been numerous posts in various places on the forum about apinger issues that started appearing mostly in 2.1.x. These issues still exist in 2.2 Alpha (as of a snapshot from a couple of weeks back).
1. The main issue is that after some time (this period varies), apinger shows wrong RTT and loss numbers on the dashboard, or displays "Pending", and eventually excludes the gateway from gateway groups, while there is no actual RTT delay or packet loss on that WAN, as verified by pinging from any LAN client.
2. Restarting apinger doesn't help much. In some cases, within a minute of being restarted, it reports wrong RTT and loss, goes to "Pending", and then marks the gateway as down (while, in the default gateway's case, pfSense keeps using that gateway for traffic to the Internet).
3. The problem seems to be more prevalent in multi-WAN setups where the monitor IP is an Internet host with a normal RTT of more than 50 ms. The problem lessens if the gateway IP is used as the monitor IP (the default option), but then the value of testing whether Internet connectivity is actually there diminishes.
4. There's a weird workaround that isn't practical for everyone: if you have more than one WAN, place another router (such as a Wi-Fi router) between the Internet connection and pfSense. Surprisingly, doing this stabilizes apinger so it doesn't drift much. In addition to this workaround, one can use a cron job to automatically restart apinger every hour or so, to further avoid the wrong behavior.
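For reference, the hourly-restart part of this workaround can be a plain crontab entry. A minimal sketch, assuming apinger's binary is at /usr/local/sbin/apinger and its generated config at /var/etc/apinger.conf (verify these paths on your own install, e.g. with `ps ax | grep apinger`):

```shell
# Root crontab entry: restart apinger at the top of every hour.
# Paths are assumptions -- check where your build keeps the binary and config.
0 * * * * /usr/bin/killall apinger; sleep 2; /usr/local/sbin/apinger -c /var/etc/apinger.conf
```

On pfSense itself, the Cron package is the more usual way to add such an entry than editing the crontab by hand.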
apinger is the basic mechanism required for the smooth functioning of multi-WAN setups, so it is critical that it works flawlessly. Many users are affected, and not everyone reports it. Many hours of productivity are being lost because gateways are falsely marked inactive. It should be fixed in 2.2.
Here's a list of forum postings regarding various apinger-related issues:
https://forum.pfsense.org/index.php?topic=68637
https://forum.pfsense.org/index.php?topic=66328
https://forum.pfsense.org/index.php?topic=69533
https://forum.pfsense.org/index.php?topic=74914
https://forum.pfsense.org/index.php?topic=72085
https://forum.pfsense.org/index.php?topic=72314
https://forum.pfsense.org/index.php?topic=76770
https://forum.pfsense.org/index.php?topic=77266
https://forum.pfsense.org/index.php?topic=73009
https://forum.pfsense.org/index.php?topic=70441
https://forum.pfsense.org/index.php?topic=72455
https://forum.pfsense.org/index.php?topic=73109
https://forum.pfsense.org/index.php?topic=72303
https://forum.pfsense.org/index.php?topic=69261
https://forum.pfsense.org/index.php?topic=71879
https://forum.pfsense.org/index.php?topic=67354
https://forum.pfsense.org/index.php?topic=65505
https://forum.pfsense.org/index.php?topic=63470.0
Thanks,
msu
-
(I wasn't able to attach a screenshot showing the issue due to a 500 internal server error.) Here it is:
http://imgur.com/GCqD1CV
-
I second pubmsu.
I get more than 100 emails every day about these unstable cable providers.
On some days I get more than 2000 emails, all sent within 5 minutes.
-
+1
This is quite an annoying issue.
-
I'm also having this issue. I have 3 WANs with combined bandwidth, but 2 of my gateways are always shown as down, which they are not. :(
-
They've had the gall to offer 4 general releases with this problem. It makes anything higher than 2.0.3 worthless for anything more serious than a simple home router.
Shame. This is where software ideas like this go to die.
-
The problem is that none of us can reproduce this on demand in an environment we control and can debug. Yes, we know some people have problems, but they don't affect the majority of users.
Multi-WAN works fine here, and for many others. Ermal has been working on fixing this up but it's been a long process since the exact parameters to reproduce the problem have never been clearly identified or replicated.
The notifications are a separate issue entirely, but that's not slated to be cleaned up in 2.2; it will be addressed sometime afterward.
-
Thanks for your update, jimp - at least we now know the difficulty in fixing it.
If you want, I can give you access to an otherwise good multi-WAN test environment that has this issue and is an exact replica of our production environment. Let me know in a PM if this would help.
We have been using pfSense for the last 7 years or so, and really need this to be resolved.
-
Getting access wouldn't help as much as definitively identifying the specific condition leading to the problem, if possible (e.g. latency over X for Y amount of time, or Z gateways with Q latency, etc.).
Depending on how long it takes for the problem to recur, it could still be difficult for us to find time to watch it closely enough to pinpoint exactly when the problem starts.
-
Got it, Jim. In our case, though, the problem starts within 5 to 10 minutes.
-
Seeing the quality graphs for your gateways may help as well, with notes about where apinger was restarted and where the problem was first noticed.
I tried to artificially induce latency by putting one firewall in front of another and increasing the delay on a limiter for ICMP traffic. Each time I let it run for 15+ minutes at various latencies and then lifted the limiter; it always bounced back to close to 0 for me. I never saw it get stuck, so there must be a few different factors at work making it get stuck over time for others.
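For anyone who wants to repeat this kind of experiment from the command line rather than the limiter GUI, a rough equivalent using FreeBSD's dummynet on the box placed in front (assuming ipfw/dummynet is available there; the 150 ms delay and rule number are arbitrary test values, not figures from the test described above):

```shell
# Create a dummynet pipe that adds 150 ms of delay (arbitrary test value)
kldload dummynet 2>/dev/null
ipfw pipe 1 config delay 150ms
# Send only ICMP through the pipe so other traffic is unaffected
ipfw add 100 pipe 1 icmp from any to any
# ...let apinger watch the inflated latency for 15+ minutes, then lift it:
ipfw delete 100
ipfw pipe 1 delete
```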
-
Maybe this is something to consider: I never had any problems with my setup running NanoBSD over the last year. I recently switched to a full install (the CF card died, so I bought an SSD), and now I am seeing packet loss steadily increasing for my HE.net tunnel: 120% packet loss at the moment, with a firewall uptime of 4 days. I am pinging the same IPv6 host via Smokeping from a Linux host behind the pfSense gateway, and those graphs look a little different. This is on 2.1.4, not 2.2.
-
I have this problem too. Apinger reports that my WAN connection keeps going up and down several times every hour. It started a few months ago. I have not switched ISPs or anything. I installed the latest snapshot (built on Mon Jul 28 12:22:20 CDT 2014) and still have the problem.
I do not use multiple WANs. Just one.
-
I had this same issue as well and ended up putting the cable modem's local/private IP address (192.168.100.1) into the config as the workaround. It doesn't do anything for monitoring the connection, but at least it's not constantly bouncing the connection up and down.
-
When you say "the config," do you mean the "Monitor IP"? My config was monitoring the default gateway IP, which is on the cable modem, and I still had the problem.
-
I don't think this is related to the main issue described here, but I have observed similar behavior under high network load while using the traffic shaper: the ping probes are put in the default queue instead of the one specified by the floating rule on WAN that is supposed to handle them. This probably happens because apinger starts before the firewall itself, since killing the related states makes them go into the correct queue immediately.
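For anyone wanting to try the "kill the related states" step, it can be done from a shell. A sketch, where 203.0.113.1 is a placeholder for the gateway's monitor IP (substitute your own):

```shell
# Kill existing states to the monitor host so apinger's next probes create
# fresh states that get matched (and queued) by the floating rule.
# 203.0.113.1 is a placeholder for your monitor IP.
pfctl -k 0.0.0.0/0 -k 203.0.113.1
```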
-
The issue still exists in recent builds in my testing environments.
-
…and it frequently results in tunnels (IPsec or OpenVPN) going down for no obvious reason, other than apinger freaking out.
I increased the apinger alarm thresholds significantly; that helps at least a little...
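For context, the thresholds in question live in apinger's alarm definitions (on pfSense they are set per gateway in the advanced gateway settings rather than by hand-editing the file). A sketch of what raised thresholds look like in apinger.conf; the alarm names and values here are illustrative, not recommendations:

```
alarm delay "delay" {
    delay_low 500ms
    delay_high 800ms
}
alarm loss "loss" {
    percent_low 15
    percent_high 25
}
```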
-
I don't have these problems at all running 40+ pfSense boxes….
I use traceroute to monitor the desired upstream IP to decide whether the GW is down.
All are currently stable, running with 0% packet loss..... No change from 2.0.X.
I don't like the idea of monitoring external hosts that are not in your upstream path; that way you don't get a real picture of your GW status.
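In that spirit, a minimal sketch of a traceroute-based check (the helper name and the placeholder hop IP 198.51.100.1 are mine for illustration, not the poster's actual setup): the gateway is treated as up only if a known upstream hop shows up in the path.

```shell
#!/bin/sh
# hop_present <hop-ip>: read traceroute output on stdin and succeed only if
# the given upstream hop appears somewhere in the path.
hop_present() {
    grep -qF "$1"
}

# Usage sketch -- 198.51.100.1 is a placeholder for a known hop on this WAN:
#   traceroute -n -m 5 -w 2 8.8.8.8 | hop_present 198.51.100.1 \
#       && echo "gateway up" || echo "gateway down"
```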
-
I dont have these problems at all running 40+ pfsenses….
What's your config for the WAN interfaces? I see a lot of people writing that they have problems, but they don't post their configs to help troubleshoot.
Sometimes I have the problem on my multi-WAN setup (a PPPoE WAN configured with only a user and password, and a PPP (LTE) WAN configured with only the default number). I don't see the problem when I disconnect the PPP one.