Multi-WAN gateway failover not switching back to Tier 1 gateway after it comes back online
-
Hi again
This past week we installed a new machine with 2.2.4 and two WANs with failover, and hit the same problem. In this case we have two different LANs, and each one has a failover group with a different order (one with wan1->wan2 and the other with wan2->wan1). Neither of them redirects traffic back to the main gateway when it recovers (we have to make some change and hit Save, as yanakis says).
It's a new installation without anything strange. We ran several tests in both directions, and I can confirm the problem exists: it never went back automatically to the recovered main WAN.
We didn't find anything new or any more clues; it just doesn't work.
Regards
-
+1 with arcanos.
In fact, instead of attempting to troubleshoot this and failing, it would be better if someone who has this working posted a complete series of screenshots showing their setup. Then we could all learn from a working environment.
-
Is PPPoE a common factor for those that don't work? Both my WANs are DHCP.
There's really nothing to it. Create a gateway group with a Tier 1 and Tier 2 with member down as the trigger level and policy route to it.
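For anyone who wants the screenshot-equivalent in text form, here is a minimal sketch of what such a group ends up as in config.xml. The gateway names and group name are placeholders, and the exact field set is from memory of the 2.2/2.3 schema, so treat it as illustrative rather than authoritative:

```xml
<gateways>
  <gateway_group>
    <name>WANFailover</name>
    <!-- item format: gateway|tier|virtual-IP ("address" = interface address) -->
    <item>WAN1_DHCP|1|address</item>
    <item>WAN2_DHCP|2|address</item>
    <!-- "down" corresponds to the Member Down trigger level -->
    <trigger>down</trigger>
    <descr>Fail over to WAN2 when WAN1 is down</descr>
  </gateway_group>
</gateways>
```

The LAN firewall rule then selects WANFailover as its gateway (under the rule's advanced options) instead of default.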
-
Correct, PPPoE is the default and DHCP is the failover.
-
Not PPPoE in my case. This latest case is two cable connections, with routers doing NAT and a DMZ pointing at the WAN interfaces of pfSense. But I've seen the problem with DSL and with cable in bridge mode.
-
At first glance I have the same issue, but looking deeper I can see that my gateway really stays offline until I re-apply the interface config page or reboot.
Short description of config and behavior:
I have two gateways (1. fiber / 2. cable modem) with a routing group for load balancing (Tier 1 / Tier 1). Both gateways are monitored against external DNS servers. The routing group is set as the gateway in the firewall rule. "Use sticky connections" under System > Advanced > Miscellaneous is on.
At the beginning, after a reboot, all works fine and traffic is distributed to both gateways with weight 1:4.
But after some minutes or hours the second gateway always goes offline (100% packet loss) and keeps that status until I re-apply the interface config or reboot. It's not an apinger problem; the gateway is really broken. A ping from Diagnostics > Ping with that gateway's interface as source doesn't work (100% loss). The cable modem isn't disconnected, and it works if I plug a notebook in there. So pfSense really stops the gateway and keeps it broken. Even if I disconnect the LAN wire and reconnect it, no reaction.
The same happens on my backup pfSense, which is running in CARP mode. There is no traffic load there, but the GW stops as well.
If I set the routing group to redundant mode (GW 1 Tier 1 / GW 2 Tier 2, or GW 2 Tier 1 / GW 1 Tier 2), then all works OK. The gateways stay online. Also, after reconnecting the wire, the interface comes online again.
My guess is that something must be wrong with load-balanced gateways. But I need the capacity of both gateways.
-
You should set a working monitor IP for each interface, such as a DNS server's IP.
-
Hello,
I'm facing the same problem. I've read those threads (with no solution).
I'm running 2.3.1 with 2 WANs (1 cable/main and 1 DSL-PPPoE/secondary) and 2 groups. Failover is working (the trigger fires OK), but it does not switch back after the weak connection is back at 100%.
Ready to send screenshots, just ask.
logs:
Jun 16 13:49:20 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Alarm latency 9274us stddev 5829us loss 21%
Jun 16 13:49:42 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Clear latency 7586us stddev 4056us loss 15%
Jun 16 13:53:58 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Alarm latency 7362us stddev 4941us loss 21%
Jun 16 13:54:15 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Clear latency 6543us stddev 3719us loss 19%
Jun 16 13:54:39 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Alarm latency 6692us stddev 3840us loss 21%
Jun 16 13:54:57 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Clear latency 6644us stddev 3338us loss 15%
Jun 16 13:56:03 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Alarm latency 8839us stddev 5402us loss 21%
Jun 16 13:56:19 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Clear latency 8292us stddev 4864us loss 19%
Jun 16 13:56:43 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Alarm latency 8431us stddev 5556us loss 22%
Jun 16 13:57:02 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Clear latency 7940us stddev 5158us loss 15%
Jun 16 13:58:35 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Alarm latency 12630us stddev 12111us loss 21%
Jun 16 13:58:53 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Clear latency 8282us stddev 4592us loss 15%
Jun 16 13:59:21 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Alarm latency 8983us stddev 5856us loss 21%
Jun 16 13:59:32 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Clear latency 8447us stddev 5473us loss 16%
Jun 16 13:59:58 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Alarm latency 8206us stddev 5630us loss 21%
Jun 16 14:00:11 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Clear latency 7373us stddev 4132us loss 14%
Jun 16 14:01:14 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Alarm latency 8049us stddev 4691us loss 21%
Jun 16 14:01:44 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Clear latency 7842us stddev 3865us loss 18%
Jun 16 14:01:47 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Alarm latency 7944us stddev 3892us loss 21%
Jun 16 14:02:18 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Clear latency 7717us stddev 3673us loss 12%
Jun 16 14:03:51 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Alarm latency 7952us stddev 4608us loss 21%
Jun 16 14:04:16 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Clear latency 7030us stddev 3415us loss 12%
Jun 16 14:04:28 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Alarm latency 7711us stddev 4555us loss 21%
Jun 16 14:04:56 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Clear latency 7538us stddev 4081us loss 14%
Jun 16 14:05:10 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Alarm latency 8504us stddev 5216us loss 21%
Jun 16 14:05:32 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Clear latency 8245us stddev 4794us loss 13%
Jun 16 14:05:51 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Alarm latency 8544us stddev 5200us loss 21%
Jun 16 14:06:14 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Clear latency 7369us stddev 3934us loss 16%
Jun 16 14:06:26 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Alarm latency 7862us stddev 4613us loss 21%
Jun 16 14:06:56 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Clear latency 7151us stddev 3861us loss 13%
Jun 16 14:11:12 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Alarm latency 7910us stddev 4976us loss 21%
Jun 16 14:11:26 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Clear latency 6648us stddev 3553us loss 15%
Jun 16 14:11:49 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Alarm latency 6910us stddev 4027us loss 21%
Jun 16 14:12:11 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Clear latency 6271us stddev 2901us loss 15%
Jun 16 14:12:28 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Alarm latency 6705us stddev 3698us loss 21%
Jun 16 14:12:52 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Clear latency 6371us stddev 2763us loss 11%
Jun 16 14:13:45 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Alarm latency 8346us stddev 5486us loss 21%
Jun 16 14:14:57 dpinger WAN2CABLEGW xxx.xxx.xxx.xxx: Clear latency 8065us stddev 5624us loss 16%
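As a quick sanity check on logs like the above, here is a small shell sketch that counts how often the gateway alarmed and recovered. The count_flaps helper is my own, and it assumes the dpinger log format shown above:

```shell
#!/bin/sh
# count_flaps: read dpinger log lines on stdin and report how many times the
# gateway raised an alarm (degraded) and cleared it (recovered).
count_flaps() {
  awk '/: Alarm / { alarms++ }
       /: Clear / { clears++ }
       END { printf "alarms=%d clears=%d\n", alarms, clears }'
}

# On pfSense 2.x the gateway log is circular, so something like
#   clog /var/log/gateways.log | count_flaps
# should feed it (the path and clog usage are assumptions about your install).
```

Thirty-plus alarm/clear pairs in a half hour, as in the excerpt above, is the flapping the next reply is talking about.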
-
I would change the monitor IP in the WAN2CABLEGW to 8.8.8.8 or anything else that responds reliably and see if things improve. You can't expect any multi-WAN routing solution to perform with any semblance of continuity with flapping like that.
-
Actually, the monitor IP is the default, so it is the gateway of each WAN.
I will try one like Google's (8.8.8.8), but the problem is not the monitor IP or the trigger. The problem is that when the defective WAN is back to normal (the ping to the monitor IP is good quality again), the system does not switch back. I mean, if I log in to pfSense (hours after the problem) and look at my Gateway Groups status, they are all green (same for the gateways), but the system does not switch back to the preferred gateway.
-
No, the problem is that your gateway is flapping about every minute due to packet loss to your monitor IP. If that were in my multi-WAN group, I would disable it until it was fixed. If that's "just the way it is", you will need to increase your monitoring thresholds and consider it up even when it sucks like that.
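If raising thresholds is the route taken, the relevant knobs are in the gateway's advanced settings: alarm levels for packet loss and latency, plus the probe interval. A sketch of how such settings are stored in config.xml; the field names are from memory of the gateway_item schema and the values are only illustrative for a link that "normally" shows ~21% loss:

```xml
<gateway_item>
  <interface>wan2</interface>
  <name>WAN2CABLEGW</name>
  <monitor>8.8.8.8</monitor>
  <!-- packet-loss alarm window, in percent -->
  <losslow>25</losslow>
  <losshigh>40</losshigh>
  <!-- latency alarm window, in milliseconds -->
  <latencylow>300</latencylow>
  <latencyhigh>600</latencyhigh>
</gateway_item>
```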
-
OK, my gateway has "troubles" for a few minutes a day (not all the time).
That's precisely why I want failover.
And this does not explain why the system does not go back to its first WAN after that WAN is seen green by the system again (hours later!).
If failover is only meant to work on connections that never have troubles, it makes no sense to me…
-
Do you have one or two gateway groups defined? (Looking at the one with time stamp 02-25-44.)
What you call "WANGROUP" is easier to handle when called "PPPoE 2 UPC".
Now you need an additional "UPC 2 PPPoE" group with reversed tiers.
Add another firewall rule for that one as well and it should work. And start with setting both "Trigger levels" to "Member Down".
-
I am pretty sure this is exactly related to my issue and my most recent detailed post here:
https://forum.pfsense.org/index.php?topic=86851.msg632594#msg632594
-
Hello JM,
Globally it seems to be the same problem reported by most of the people writing in your thread. I've seen a bug report, but the dev team considers it not a bug but a misconfiguration, without explaining where the misconfiguration is… strange.
-
It is not a bug.
A setting that kills all states on a Tier X interface when a Tier < X interface returns to service would be a feature request.
I did not see one for this on redmine.pfsense.org.
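Until such a feature exists, the usual manual workaround is to flush states after failback so new connections are re-evaluated against the gateway group. A rough shell sketch using standard pf tooling; the subnet below is a placeholder, and note that flushing all states briefly drops every active connection:

```shell
#!/bin/sh
# Flush the entire pf state table; existing connections are torn down and
# new ones are matched against the rules (and the gateway group) again.
pfctl -F states

# Narrower option: kill only states whose source matches a given subnet
# (192.0.2.0/24 is a placeholder for your LAN).
# pfctl -k 192.0.2.0/24
```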
-
After much reading on this subject, this is the first time I've read that this is normal and would be a feature request. Before, I'd read that this was the result of misconfiguration, meaning that the connection should go back to what it was before failover…
For example:
https://redmine.pfsense.org/issues/5090

Chris Buechler:
…
I went through and re-tested multi-WAN in general on 2.2.5 (which is the same as 2.2.4 in that regard) and it fails over and back as it should just fine every time.
...
There may be some edge case but nothing here to suggest what that might be.

BUT a few lines later, it goes another way:

Chris Buechler:
…
that's how it's supposed to work at this point. Sounds like you want state killing on failback, which doesn't exist at this time. feature #855 covers that
https://redmine.pfsense.org/issues/855
So the final answer is: FAILOVER DOES NOT GO BACK TO THE INITIAL STATE.
This is surprising, but knowing this, I'll stop losing time trying different config options…
-
It is not a bug.
A setting that kills all states on a Tier X interface when a Tier < X interface returns to service would be a feature request.
I did not see one for this on redmine.pfsense.org.
Right, but if it's not a bug, then how do you get traffic to go back over the original interface when it returns online?
Killing the states does not always work.
I have also been able to verify that a brand-new device connected to the network will still route the same way (onto the failover interface) even if the primary WAN was back online BEFORE the new device was connected.
I have also been testing this in a virtual environment and can replicate the issue, although it is not always the same: sometimes new states will follow the correct route (back over the primary WAN) and other times they will get stuck on the backup WAN. It is not consistent, which doesn't make sense.
-
Let's be clear: to me it is a bug. But if they say no, I have no choice.
Currently I reset all states, and sometimes I change the firewall rule (time-consuming!!!). If there's a better suggestion, I'm interested.
-
Killing the states does not always work.
Please demonstrate with evidence.
-
@MrD:

"I did not see one for this on redmine.pfsense.org."

https://redmine.pfsense.org/issues/855

There. Feature #855. My redmine searching could obviously use a tuneup.