Apinger only working on WAN, 8/6/13 64-bit snapshot
-
I had to roll back to the 8/1/13 snap to fight with my ISP over 39% packet loss, but if there's anything that would help, just let me know.
-
I think what might be confusing is that apinger now shows up in Services. My test system has two gateway objects, each with monitoring disabled since I do not need it, and apinger shows as down in Services. I'd say apinger probably does not have to run on that system.
cat /var/etc/apinger.conf
# pfSense apinger configuration file. Automatically Generated!
## User and group the pinger should run as
user "root"
group "wheel"
## Mailer to use (default: "/usr/lib/sendmail -t")
#mailer "/var/qmail/bin/qmail-inject"
## Location of the pid-file (default: "/var/run/apinger.pid")
pid_file "/var/run/apinger.pid"
## Format of timestamp (%s macro) (default: "%b %d %H:%M:%S")
#timestamp_format "%Y%m%d%H%M%S"
status {
## File where the status information should be written to
file "/var/run/apinger.status"
## Interval between file updates
## when 0 or not set, file is written only when SIGUSR1 is received
interval 5s
}
########################################
# RRDTool status gathering configuration
# Interval between RRD updates
rrd interval 60s;
## These parameters can be overridden in a specific alarm configuration
alarm default {
command on "/usr/local/sbin/pfSctl -c 'service reload dyndns %T' -c 'service reload ipsecdns' -c 'service reload openvpn %T' -c 'filter reload' "
command off "/usr/local/sbin/pfSctl -c 'service reload dyndns %T' -c 'service reload ipsecdns' -c 'service reload openvpn %T' -c 'filter reload' "
combine 10s
}
## "Down" alarm definition.
## This alarm will be fired when target doesn't respond for 30 seconds.
alarm down "down" {
time 10s
}
## "Delay" alarm definition.
## This alarm will be fired when responses are delayed more than 200ms
## it will be canceled, when the delay drops below 100ms
alarm delay "delay" {
delay_low 200ms
delay_high 500ms
}
## "Loss" alarm definition.
## This alarm will be fired when packet loss goes over 20%
## it will be canceled, when the loss drops below 10%
alarm loss "loss" {
percent_low 10
percent_high 20
}
target default {
## How often the probe should be sent
interval 1s
## How many replies should be used to compute average delay
## for controlling "delay" alarms
avg_delay_samples 10
## How many probes should be used to compute average loss
avg_loss_samples 50
## The delay (in samples) after which loss is computed
## without this delays larger than interval would be treated as loss
avg_loss_delay_samples 20
## Names of the alarms that may be generated for the target
alarms "down","delay","loss"
## Location of the RRD
#rrd file "/var/db/rrd/apinger-%t.rrd"
}
-
On the 8/1/13 snap:
# pfSense apinger configuration file. Automatically Generated!
## User and group the pinger should run as
user "root"
group "wheel"
## Mailer to use (default: "/usr/lib/sendmail -t")
#mailer "/var/qmail/bin/qmail-inject"
## Location of the pid-file (default: "/var/run/apinger.pid")
pid_file "/var/run/apinger.pid"
## Format of timestamp (%s macro) (default: "%b %d %H:%M:%S")
#timestamp_format "%Y%m%d%H%M%S"
status {
## File where the status information should be written to
file "/var/run/apinger.status"
## Interval between file updates
## when 0 or not set, file is written only when SIGUSR1 is received
interval 5s
}
########################################
# RRDTool status gathering configuration
# Interval between RRD updates
rrd interval 60s;
## These parameters can be overridden in a specific alarm configuration
alarm default {
command on "/usr/local/sbin/pfSctl -c 'service reload dyndns %T' -c 'service reload ipsecdns' -c 'service reload openvpn %T' -c 'filter reload' "
command off "/usr/local/sbin/pfSctl -c 'service reload dyndns %T' -c 'service reload ipsecdns' -c 'service reload openvpn %T' -c 'filter reload' "
combine 10s
}
## "Down" alarm definition.
## This alarm will be fired when target doesn't respond for 30 seconds.
alarm down "down" {
time 10s
}
## "Delay" alarm definition.
## This alarm will be fired when responses are delayed more than 200ms
## it will be canceled, when the delay drops below 100ms
alarm delay "delay" {
delay_low 200ms
delay_high 500ms
}
## "Loss" alarm definition.
## This alarm will be fired when packet loss goes over 20%
## it will be canceled, when the loss drops below 10%
alarm loss "loss" {
percent_low 10
percent_high 20
}
target default {
## How often the probe should be sent
interval 1s
## How many replies should be used to compute average delay
## for controlling "delay" alarms
avg_delay_samples 10
## How many probes should be used to compute average loss
avg_loss_samples 50
## The delay (in samples) after which loss is computed
## without this delays larger than interval would be treated as loss
avg_loss_delay_samples 20
## Names of the alarms that may be generated for the target
alarms "down","delay","loss"
## Location of the RRD
#rrd file "/var/db/rrd/apinger-%t.rrd"
}
alarm loss "CABLEMODEM_DHCPloss" {
percent_low 3
percent_high 15
}
alarm delay "CABLEMODEM_DHCPdelay" {
delay_low 60ms
delay_high 250ms
}
alarm down "CABLEMODEM_DHCPdown" {
time 15s
}
target "4.53.194.9" {
description "CABLEMODEM_DHCP"
srcip "209.105.187.12"
interval 3s
alarms override "CABLEMODEM_DHCPloss","CABLEMODEM_DHCPdelay","CABLEMODEM_DHCPdown";
rrd file "/var/db/rrd/CABLEMODEM_DHCP-quality.rrd"
}
alarm loss "DSL_DHCPloss" {
percent_low 3
percent_high 15
}
alarm delay "DSL_DHCPdelay" {
delay_low 60ms
delay_high 250ms
}
alarm down "DSL_DHCPdown" {
time 15s
}
target "4.69.136.185" {
description "DSL_DHCP"
srcip "192.168.254.1"
interval 3s
alarms override "DSL_DHCPloss","DSL_DHCPdelay","DSL_DHCPdown";
rrd file "/var/db/rrd/DSL_DHCP-quality.rrd"
}
alarm loss "OPT2GWv6loss" {
percent_low 5
percent_high 10
}
alarm delay "OPT2GWv6delay" {
delay_low 60ms
delay_high 250ms
}
alarm down "OPT2GWv6down" {
time 100s
}
target "2620:0:ccd::2" {
description "OPT2GWv6"
srcip "2001:470:7b:15e::2"
interval 20s
alarms override "OPT2GWv6loss","OPT2GWv6delay","OPT2GWv6down";
rrd file "/var/db/rrd/OPT2GWv6-quality.rrd"
}
-
On the gateway status widget, the WANGW first gives a crazy high latency value (screenshot attached), then a few seconds later it goes to "pending". OPT1 always shows 0ms. Ping from a LAN client routing through either interface gives normal values.
2.1-RC1 (i386)
built on Tue Aug 6 16:41:59 EDT 2013
FreeBSD 8.3-RELEASE-p9
apinger.conf:
# pfSense apinger configuration file. Automatically Generated!
## User and group the pinger should run as
user "root"
group "wheel"
## Mailer to use (default: "/usr/lib/sendmail -t")
#mailer "/var/qmail/bin/qmail-inject"
## Location of the pid-file (default: "/var/run/apinger.pid")
pid_file "/var/run/apinger.pid"
## Format of timestamp (%s macro) (default: "%b %d %H:%M:%S")
#timestamp_format "%Y%m%d%H%M%S"
status {
## File where the status information should be written to
file "/var/run/apinger.status"
## Interval between file updates
## when 0 or not set, file is written only when SIGUSR1 is received
interval 5s
}
########################################
# RRDTool status gathering configuration
# Interval between RRD updates
rrd interval 60s;
## These parameters can be overridden in a specific alarm configuration
alarm default {
command on "/usr/local/sbin/pfSctl -c 'service reload dyndns %T' -c 'service reload ipsecdns' -c 'service reload openvpn %T' -c 'filter reload' "
command off "/usr/local/sbin/pfSctl -c 'service reload dyndns %T' -c 'service reload ipsecdns' -c 'service reload openvpn %T' -c 'filter reload' "
combine 10s
}
## "Down" alarm definition.
## This alarm will be fired when target doesn't respond for 30 seconds.
alarm down "down" {
time 10s
}
## "Delay" alarm definition.
## This alarm will be fired when responses are delayed more than 200ms
## it will be canceled, when the delay drops below 100ms
alarm delay "delay" {
delay_low 200ms
delay_high 500ms
}
## "Loss" alarm definition.
## This alarm will be fired when packet loss goes over 20%
## it will be canceled, when the loss drops below 10%
alarm loss "loss" {
percent_low 10
percent_high 20
}
target default {
## How often the probe should be sent
interval 1s
## How many replies should be used to compute average delay
## for controlling "delay" alarms
avg_delay_samples 10
## How many probes should be used to compute average loss
avg_loss_samples 50
## The delay (in samples) after which loss is computed
## without this delays larger than interval would be treated as loss
avg_loss_delay_samples 20
## Names of the alarms that may be generated for the target
alarms "down","delay","loss"
## Location of the RRD
#rrd file "/var/db/rrd/apinger-%t.rrd"
}
alarm loss "WANGWloss" {
percent_low 40
percent_high 50
}
alarm delay "WANGWdelay" {
delay_low 4000ms
delay_high 5000ms
}
alarm down "WANGWdown" {
time 30s
}
target "8.8.4.4" {
description "WANGW"
srcip "10.49.82.1"
interval 2s
alarms override "WANGWloss","WANGWdelay","WANGWdown";
rrd file "/var/db/rrd/WANGW-quality.rrd"
}
alarm loss "OPT1GWloss" {
percent_low 40
percent_high 50
}
alarm delay "OPT1GWdelay" {
delay_low 4000ms
delay_high 5000ms
}
alarm down "OPT1GWdown" {
time 30s
}
target "8.8.8.8" {
description "OPT1GW"
srcip "10.49.81.1"
interval 2s
alarms override "OPT1GWloss","OPT1GWdelay","OPT1GWdown";
rrd file "/var/db/rrd/OPT1GW-quality.rrd"
}
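For what it's worth, you can also see what apinger itself currently thinks, independently of the widget, by reading the status file it writes (the path and the 5-second update interval are the ones set in the config above):
cat /var/run/apinger.status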
-
Working fine here.
Try stopping and restarting apinger.
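In case it helps, one way to do that from a shell is below. This is only a sketch: restarting it from Status > Services should be equivalent, and I believe the binary path and -c option below match how pfSense launches apinger, but treat that as an assumption (the config path is the one from the cat command earlier in this thread):
killall apinger
/usr/local/sbin/apinger -c /var/etc/apinger.conf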
-
I stopped/restarted apinger on both systems that I had upgraded. The gateway status widget latencies are now showing fine. I had upgraded 3 systems to Aug 6 snapshot - the 2 with multi-gateways had these symptoms. The 1 with only 1 gateway did not have a problem. Not a big enough sample size to decide if multiple gateways being monitored is the real trigger for the "feature". Will report tomorrow if I see the latency numbers go silly again.
-
I had upgraded 3 systems to Aug 6 snapshot - the 2 with multi-gateways had these symptoms. The 1 with only 1 gateway did not have a problem. Not a big enough sample size to decide if multiple gateways being monitored is the real trigger for the "feature". Will report tomorrow if I see the latency numbers go silly again.
Well, if an IPv6 tunnel counts as multi-gateway, then count me in.
-
It seems that the longer the delay to the gateway, the more likely the problem is to compound over time.
I set one of my gateways in a VM last night to monitor 8.8.8.8, and within an hour it was reporting delays in the thousands of ms when in reality it was ~50ms.
There is also still an issue with changing monitor IPs requiring a manual restart of apinger.
-
It seems that the longer the delay to the gateway, the more likely the problem is to compound over time.
I set one of my gateways in a VM last night to monitor 8.8.8.8, and within an hour it was reporting delays in the thousands of ms when in reality it was ~50ms.
Pretty much the same here. If I monitor the actual gateway, it does not happen. However, monitoring the actual gateway is rather useless for me; I need to monitor real internet connectivity, not a device a couple of meters away from the firewall.
-
We have the same problem. Pinging the default gateway is normal (1.3ms), but when we monitor the next-hop gateway we see a reported ping of over 600ms, while a ping from the terminal shows 1.4ms. After a restart of apinger the values are correct again.
-
Hi, we have now used the gateway IP for monitoring and we have the problem on the backup firewall too. The reported ms stacks up over time: it starts at 1ms and after some hours apinger is reporting over 2000ms. After restarting apinger it starts at 1ms again and the value climbs over time.
-
My (calculated) ping times are growing, too.
My RRD graphs from the last 6 months are less than 1 pixel tall.
-
Same problem here.
Is it possible to cut out the affected part of the RRD graph?
-
I have a strong suspicion my problem is related: https://redmine.pfsense.org/issues/3138
Multi-WAN has been going FUBAR since I switched from RC0 to RC1 a couple of days ago. I also confirm that the reported ping increases along a linear curve.
-
Is it possible to cut out the affected part of the RRD graph?
You could export the RRD to XML, edit the XML to reset the values of the affected part of the graph, then import the XML back into the RRD.
Export / Import RRD Database
/usr/local/bin/rrdtool dump rrddatabase xmldumpfile
/usr/local/bin/rrdtool restore -f xmldumpfile rrddatabase
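As a concrete sketch (untested, adjust file names to your own gateways) using one of the quality RRDs named earlier in this thread:
/usr/local/bin/rrdtool dump /var/db/rrd/WANGW-quality.rrd /tmp/WANGW-quality.xml
# edit the XML and replace the bogus <v> entries for the affected period with NaN (or a sane value)
vi /tmp/WANGW-quality.xml
mv /var/db/rrd/WANGW-quality.rrd /var/db/rrd/WANGW-quality.rrd.bak
/usr/local/bin/rrdtool restore -f /tmp/WANGW-quality.xml /var/db/rrd/WANGW-quality.rrd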
-
I had upgraded a multi-WAN site from the 6 Aug 16:41:59 EDT 2013 snapshot to the latest snapshot yesterday (so I guess it would have been about a 12 Aug snapshot).
The 6 Aug snapshot was the one where apinger was added to the Services status list, and apinger started counting up big numbers in the latency field. I was hoping that the later snap would fix everything.
The site was remote from me, and reported "no/intermittent internet". It did seem that OpenVPN links to it were coming and going. I couldn't get on to it long enough to see anything real. From the descriptions, it was probably constantly failing over from 1 gateway to the other and back, and/or thinking that both gateways were down…
I got them to switch slices and reboot, so it is back on the 6 Aug snapshot. When I logged in just now, the latency figures on the OPT1 gateway were showing silly high numbers. I have disabled gateway monitoring on both gateways, and things have stabilised. For the moment, there will be no auto-failover at this site.
Unfortunately I can't give any better information, and for obvious reasons I don't want to roll forward at this site just now!
How are the apinger changes going? Do others have multi-WAN test systems that can be used as guinea pigs?
-
I have four gateways on three interfaces on a test VM and it was OK there, but they aren't "real" WANs.
Can you give any more information about your exact gateway config there?
-
WAN - DHCP, attached to a WiMax device that has its own private IP and NATs out to the internet. (It gets an address 10.1.1.x from the WiMax DHCP server.)
OPT1 - static private IP to a TP-Link ADSL router, which again NATs out to the real internet.
WANGW - Monitor IP 8.8.8.8 - latency thresholds 4000 to 5000ms - packet loss thresholds 40 to 50% - probe interval 2 sec - down 30 sec.
OPT1GW - Monitor IP 8.8.4.4 - latency thresholds 4000 to 5000ms - packet loss thresholds 40 to 50% - probe interval 2 sec - down 30 sec.
These connections have reasonably high latency normally, and when saturating the links with downloads the latency would normally go high, hence the wacky high gateway monitoring parameters to prevent gateways from being declared down when they are in fact "working".
Unfortunately I can't give the exact symptoms, since it was all done by phone call with instructions on how to go back. The CF card multi-slice thing is very useful. As per my previous post, I do know that links were coming and going, as I observed OpenVPN site-to-site links establishing for a minute or so, then dropping out.
I am at another site with multi-WAN at the moment. If I can gain a little confidence that apinger in the latest build is working OK and seems to be controlling failover OK, then I can upgrade here this evening and will be around to monitor it the next few days. This site is on a 31 Jul snap, which was before the recent apinger changes. So I will easily be able to switch back slices if needed. (I am not at home with a real test box)
-
I pulled up another VM that has a better multi-WAN config and it was still OK there.
Though when I was experiencing problems before the latest round of fixes, it was worse with high-latency gateways, so it's possible that the issue is compounded by the actual latency there. To reproduce it you may have to artificially induce the same level of latency.
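If someone wants to fake that kind of latency on a test setup, a rough sketch would be to push the probe traffic through a dummynet pipe with added delay. Assumptions here: this runs on a FreeBSD box you control sitting in the path between the test VM and the internet (not on the pfSense box itself), ipfw/dummynet can be loaded there, and 8.8.8.8 is the monitor IP being delayed; I have not run this on these exact snapshots:
kldload dummynet
ipfw add 65000 allow ip from any to any   # stock ipfw defaults to deny, so open it up first
ipfw pipe 1 config delay 300ms
ipfw add 100 pipe 1 icmp from any to 8.8.8.8 out   # delay only the pings to the monitor IP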
-
I pulled up another VM that has a better multi-WAN config and it was still OK there.
Though when I was experiencing problems before the latest round of fixes, it was worse with high-latency gateways, so it's possible that the issue is compounded by the actual latency there. To reproduce it you may have to artificially induce the same level of latency.
Did you try to test failover?
As I stated in this thread http://forum.pfsense.org/index.php/topic,65455.0.html, failover does not work anymore on RC1-20130812 (in my case).
Thanks
FV