Apinger only working on wan 8/6/13 64bit snapshot
-
My (calculated) ping times are growing, too.
My RRD graphs from the last 6 months are squashed to less than 1 pixel. -
Same problem here.
Is it possible to cut out the affected part of the RRD graph?
-
I have a strong feeling my problem is related: https://redmine.pfsense.org/issues/3138
Multi-WAN has been going FUBAR since I switched from RC0 to RC1 a couple of days ago. I can also confirm that my ping times are increasing along a linear curve.
-
Is it possible to cut out the affected part of the RRD graph?
You could export the RRD to XML, edit the XML to reset the values of the affected part of the graph, then import the XML back into the RRD.
Export / Import RRD Database
/usr/local/bin/rrdtool dump rrddatabase xmldumpfile
/usr/local/bin/rrdtool restore -f xmldumpfile rrddatabase -
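As a concrete sketch of the edit step between the dump and the restore (the file names, timestamps, and sed pattern below are illustrative assumptions, not from a real pfSense RRD dump):

```shell
# Hypothetical excerpt of an rrdtool XML dump with two runaway latency samples
cat > /tmp/dump.xml <<'EOF'
<row><!-- 2013/08/12 10:00:00 --> <v>4.2000000000e+06</v></row>
<row><!-- 2013/08/12 10:01:00 --> <v>4.3000000000e+06</v></row>
EOF

# Replace the affected values with NaN so rrdtool treats them as unknown;
# in practice you would narrow the pattern to just the broken time range.
sed 's|<v>[0-9.e+]*</v>|<v>NaN</v>|' /tmp/dump.xml > /tmp/dump-fixed.xml
cat /tmp/dump-fixed.xml
```

After editing, the restore command above writes the cleaned XML back over the database (the -f flag forces the overwrite).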
I had upgraded a multi-WAN site from 6 Aug 16:41:59 EDT 2013 to the latest snapshot yesterday (so I guess it would have been about a 12 Aug snapshot).
The 6 Aug snapshot was the one when apinger was added to the Services Status list, and apinger started counting up big numbers in the latency field. I was hoping that the later snap would fix everything.
The site was remote from me, and reported "no/intermittent internet". It did seem that OpenVPN links to it were coming and going. I couldn't get on to it long enough to see anything real. From the descriptions, it was probably constantly failing over from 1 gateway to the other and back, and/or thinking that both gateways were down…
I got them to switch slices and reboot, so it is back on the 6 Aug snapshot. When I logged in just now, the latency figures on the OPT1 gateway were showing silly high numbers. I have disabled gateway monitoring on both gateways, and things have stabilised. For the moment, there will be no auto-failover at this site.
Unfortunately I can't give any better information, and for obvious reasons I don't want to roll forward at this site just now!
How are the apinger changes going? Do others have multi-WAN test systems that can be used as guinea-pigs? -
I have four gateways on three interfaces on a test VM and it was OK there, but they aren't "real" WANs.
Can you give any more information about your exact gateway config there?
-
WAN - DHCP, attached to a WiMax device that has its own private IP and NATs out to internet. (Gets an address 10.1.1.x from the WiMax DHCP server)
OPT1 - static private IP to a TP-Link ADSL router, which again NATs out to the real internet.
WANGW - Monitor IP 8.8.8.8 - latency thresholds 4000 to 5000ms - packet loss thresholds 40 to 50% - probe interval 2 sec - down 30 sec.
OPT1GW - Monitor IP 8.8.4.4 - latency thresholds 4000 to 5000ms - packet loss thresholds 40 to 50% - probe interval 2 sec - down 30 sec.
These connections have reasonably high latency normally, and when saturating the links with downloads the latency would normally go high, hence the wacky high gateway monitoring parameters to prevent gateways from being declared down when they are in fact "working".
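For reference, those GUI thresholds end up in apinger's config file (on pfSense, /var/etc/apinger.conf). A rough sketch of the relevant stanzas with the values above — the file pfSense actually generates differs in detail, so treat this as illustrative:

```
alarm down "WANGWdown" {
        time 30s            # "down" threshold
}
alarm delay "WANGWdelay" {
        delay_low 4000ms    # latency threshold, low
        delay_high 5000ms   # latency threshold, high
}
alarm loss "WANGWloss" {
        percent_low 40      # packet loss threshold, low
        percent_high 50     # packet loss threshold, high
}
target "8.8.8.8" {
        description "WANGW"
        interval 2s         # probe interval
        alarms override "WANGWloss","WANGWdelay","WANGWdown";
}
```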
Unfortunately I can't tell the exact symptoms, since it was a phone call and instructions about how to go back. The CF card multi-slice thing is very useful. As per previous post, I do know that links were coming and going, as I observed OpenVPN site-to-site links establishing for a minute or so, then dropping out.
I am at another site with multi-WAN at the moment. If I can gain a little confidence that apinger in the latest build is working OK and seems to be controlling failover OK, then I can upgrade here this evening and will be around to monitor it the next few days. This site is on a 31 Jul snap, which was before the recent apinger changes. So I will easily be able to switch back slices if needed. (I am not at home with a real test box)
-
I pulled up another VM that has a better multi-WAN config and it was still OK there.
Though when I was experiencing problems before the latest round of fixes, it was worse with high-latency gateways, so it's possible that the issue is compounded by the actual latency there. To reproduce it you may have to artificially induce the same level of latency.
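One way to artificially induce that kind of latency on a FreeBSD test box is dummynet; the delay value and rule number below are illustrative assumptions, and this requires ipfw to be available alongside pf:

```shell
# Load the dummynet module if it isn't already loaded
kldload dummynet 2>/dev/null

# Pipe 1: 400ms one-way delay (so roughly 800ms RTT through it)
ipfw pipe 1 config delay 400ms

# Push the monitor-IP probes through the pipe (rule number is arbitrary)
ipfw add 100 pipe 1 icmp from any to 8.8.8.8 out
```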
-
I pulled up another VM that has a better multi-WAN config and it was still OK there.
Though when I was experiencing problems before the latest round of fixes, it was worse with high-latency gateways, so it's possible that the issue is compounded by the actual latency there. To reproduce it you may have to artificially induce the same level of latency.
Did you try to test failover?
As I stated in this thread http://forum.pfsense.org/index.php/topic,65455.0.html, on RC1-20130812 failover does not work anymore (in my case).
Thanks
FV -
I have 2 pfsense with this.
1. pfsense:
2.1-RC1 (amd64)
built on Thu Aug 8 14:25:22 EDT 2013
FreeBSD 8.3-RELEASE-p9
1 WAN (0.4ms), always green. Since the update, apinger shows 0ms, which is wrong (pfsense1_WAN.png).
2 OpenVPN servers (23ms + 16ms) which have growing latencies. The corresponding clients on the other side are green.
2. pfSense:
2.1-RC1 (amd64)
built on Wed Aug 7 20:59:21 EDT 2013
FreeBSD 8.3-RELEASE-p9
2 WANs: static WAN (1.4ms) + DSL (22ms). The DSL has growing latency. WAN shows less latency (pfsense2_WAN.png).
2 OpenVPN servers. Both have growing latency.
1 OpenVPN client, which has growing latency, too.
-
Those snapshots are known to have apinger issues, upgrade to a current snapshot.
-
I forgot to write:
The LAN shows strange values, too. -
I pulled up another VM that has a better multi-WAN config and it was still OK there.
Though when I was experiencing problems before the latest round of fixes, it was worse with high-latency gateways, so it's possible that the issue is compounded by the actual latency there. To reproduce it you may have to artificially induce the same level of latency.
Did you try to test failover?
As I stated in this thread http://forum.pfsense.org/index.php/topic,65455.0.html, on RC1-20130812 failover does not work anymore (in my case).
Thanks
FV -
It does appear as though the filter reload at the end of the apinger event isn't doing what it should there. I'll need to run some more tests to narrow it down though.
-
I updated pfsense1.
For the first few minutes I didn't see growing latencies.
But WAN still shows 0ms in RRD, which is less than the real 0.4ms. -
The lack of failover working seems to be this:
http://redmine.pfsense.org/issues/3146 -
2.1-RC1 (i386)
built on Wed Aug 14 14:47:24 EDT 2013
FreeBSD 8.3-RELEASE-p9
Looking good so far. Someone was downloading on our 1Mbps link for an hour or so. Latency went up to around 930ms. When the download finished, the latency dropped back to under 200ms. The backup link latency is hovering around 300ms. During all this time there was no "panic" from apinger, check_reload_status or anything else to fail over links.
At another site, latency on one link is swinging between 400 and 1100ms (people working it hard), and another link sits around 120ms (less used). apinger is coping fine.
-
After updating to the latest release
2.1-RC1 (amd64)
built on Thu Aug 15 03:12:29 EDT 2013
FreeBSD 8.3-RELEASE-p9
the fast interfaces still show 0ms instead of 0.400ms on the dashboard and in RRD.
-
Hi
To ggzengel: have you enabled "Disable Gateway Monitoring"?
-
After updating to the latest release
2.1-RC1 (amd64)
built on Thu Aug 15 03:12:29 EDT 2013
FreeBSD 8.3-RELEASE-p9
the fast interfaces still show 0ms instead of 0.400ms on the dashboard and in RRD.
Same here, 2 of my 3 gateways show 0ms but they should show respectively around 14ms and 1ms.
My main gateway (WAN) is showing 1ms, which is correct. -
It's not exactly 0ms and it goes up on some interfaces.