IPv6 Performance Hit Following Update



  • So, as you can see in the attached image, IPv6 isn't doing all that great right now.

    I updated yesterday and all seemed fine; performance over IPv4 and IPv6 was good. But after running for a little while it suddenly took a dive and stayed that way.




  • As likely to be an apinger problem as a true IPv6 issue…

    Why don't you open a command prompt and run the pings manually, and see whether the latency there is the same as what apinger shows?
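    A quick way to do that comparison from a shell on the firewall. The target here is only a placeholder so the command runs anywhere; substitute the monitor address your gateway actually uses, then compare the round-trip times against Status > Gateways:

```shell
# Ping the gateway's monitor address directly and note the avg RTT,
# then compare it with the latency apinger reports.
# 127.0.0.1 is a stand-in; replace it with your monitor IP.
ping -c 5 -q 127.0.0.1

# For the IPv6 gateway, use ping6 (FreeBSD/pfSense) against its
# monitor address, e.g.:
#   ping6 -c 5 -q <IPv6 monitor address>
```

    If the manual pings look normal while apinger's numbers are spiking, that points at apinger rather than the link itself.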



  • Check the ping output from Diag > Ping; how does that compare to Status > Gateways?



  • Everything seems relatively fine; ping through Diagnostics looks normal (both IPv4 and IPv6), though I did notice some significant connectivity issues earlier.

    I don't have much of an idea as to what it could be, as the issue was very sudden and seemingly unprovoked (everything worked perfectly all day until this happened).



  • UPDATE:

    Definitely something to do with apinger. I restarted it and everything showed normal again for a while, then it once again suddenly (and seemingly at random) showed incorrect latency.

    This time, though, it was over both IPv4 and IPv6.



  • apinger => lost cause.



  • apinger graphs are pretty…  Meaningless, but pretty.  Were they always a mess like this or is it just in the last few years?



  • I find it useful when it's working.

    Being able to consistently monitor for packet loss is what I want from it.

    As for my current issue, I ended up changing the monitor addresses to Google's IPv4/IPv6 DNS servers, then reduced the polling interval to two seconds while testing. I'll post back on how well that performs, and for how long.
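    For reference, on pfSense those GUI changes end up in the generated apinger config (/var/etc/apinger.conf). A rough sketch of what the target entries might look like with Google's DNS servers and a 2-second interval — field names are from apinger's own config format, and the exact generated contents vary by version, so treat this as illustrative only:

```
## Hypothetical excerpt of a generated apinger.conf
target "8.8.8.8" {
    description "WAN_DHCP monitor"
    interval 2s
}

target "2001:4860:4860::8888" {
    description "WAN_DHCP6 monitor"
    interval 2s
}
```

    A shorter interval gives quicker detection at the cost of more probe traffic and more sensitivity to one-off blips.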



  • I set mine to 10 seconds and raised the thresholds for being considered "down".

    It does seem more reliable this way.



  • Update: The apinger setting changes mentioned in my previous post seem to have resolved the issue for me (so far).



  • Adjusting the defaults; for me, this does the trick:
    [System: Gateways: Edit gateway] WAN_DHCP6; Probe Interval=8; Down=32;
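    Those two GUI fields translate (again assuming pfSense's generated apinger.conf; keywords are from apinger's config format, and the real output differs by version) into something along these lines:

```
## Hypothetical mapping of the GUI fields above
# Probe Interval = 8  ->  a probe is sent every 8 seconds
interval 8s

# Down = 32  ->  the gateway is alarmed as down after
# 32 seconds without replies
alarm down "WAN_DHCP6down" {
    time 32s
}
```

    Keeping Down at several multiples of the probe interval means a single lost probe won't flap the gateway.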

