Introduction of NPt for IPv6 causes IPv4/IPv6 dual stack to prefer IPv4 on SG-1100



  • EDIT: Well, I think I've proven it. I set up several subnets on the same router, some allocated ULAs and using NPt out the WAN, and some using global addresses.

    The clients using global addresses all default to IPv6 on a browser test (http://ds.testmyipv6.com/) whereas the ULA clients going through NPt default to IPv4.

    If you switch subnets using the same client, the default IPv4/IPv6 browser choice changes.

    Is.. uh... this normal??

    [Original post below]

    I raised this in the IPv6 forum, but I thought I'd ask here too: is anyone aware of a reason why a correctly functioning IPv6 implementation would suddenly prefer IPv4 after IPv6 ULAs and NPt are introduced?

    http://ds.testmyipv6.com/
    https://test-ipv6.com/

    Needless to say, Advanced->Networking "prefer IPv4 to IPv6" is not checked.

    UPDATE: Could it be a latency issue? I didn't see this before I put NPt in place, but now I get this:

    Pinging google.com [172.217.165.14] with 32 bytes of data:
    Reply from 172.217.165.14: bytes=32 time=16ms TTL=55
    Reply from 172.217.165.14: bytes=32 time=14ms TTL=55
    Reply from 172.217.165.14: bytes=32 time=12ms TTL=55
    Reply from 172.217.165.14: bytes=32 time=11ms TTL=55
    Reply from 172.217.165.14: bytes=32 time=64ms TTL=55
    Reply from 172.217.165.14: bytes=32 time=12ms TTL=55
    Reply from 172.217.165.14: bytes=32 time=14ms TTL=55
    Reply from 172.217.165.14: bytes=32 time=14ms TTL=55
    Reply from 172.217.165.14: bytes=32 time=13ms TTL=55
    Reply from 172.217.165.14: bytes=32 time=15ms TTL=55

    Ping statistics for 172.217.165.14:
    Packets: Sent = 10, Received = 10, Lost = 0 (0% loss),
    Approximate round trip times in milli-seconds:
    Minimum = 11ms, Maximum = 64ms, Average = 18ms

    Pinging google.com [2607:f8b0:400b:80f::200e] with 32 bytes of data:
    Reply from 2607:f8b0:400b:80f::200e: time=27ms
    Reply from 2607:f8b0:400b:80f::200e: time=17ms
    Reply from 2607:f8b0:400b:80f::200e: time=67ms
    Reply from 2607:f8b0:400b:80f::200e: time=85ms
    Reply from 2607:f8b0:400b:80f::200e: time=96ms
    Reply from 2607:f8b0:400b:80f::200e: time=18ms
    Reply from 2607:f8b0:400b:80f::200e: time=11ms
    Reply from 2607:f8b0:400b:80f::200e: time=16ms
    Reply from 2607:f8b0:400b:80f::200e: time=12ms
    Reply from 2607:f8b0:400b:80f::200e: time=12ms

    Ping statistics for 2607:f8b0:400b:80f::200e:
    Packets: Sent = 10, Received = 10, Lost = 0 (0% loss),
    Approximate round trip times in milli-seconds:
    Minimum = 11ms, Maximum = 96ms, Average = 36ms
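
    A quick way to see which family a given client's stack will try first, independent of the router, is to check the order its resolver returns addresses in; on most platforms `getaddrinfo` applies the RFC 6724 destination-address sort, so the first result is what a browser attempts first. A minimal sketch (the function name `family_order` is my own, and resolving a dual-stack name obviously needs working DNS):

    ```python
    # Print the order in which this host's stack sorts a resolved name.
    # On most platforms getaddrinfo output order reflects RFC 6724
    # destination-address selection, so the first entry is what a
    # dual-stack application will try first.
    import socket

    def family_order(host: str, port: int = 443):
        infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
        return ["IPv6" if fam == socket.AF_INET6 else "IPv4"
                for fam, *_ in infos]

    # From a test client on each subnet, try a dual-stack name, e.g.:
    # print(family_order("google.com"))
    print(family_order("localhost", 80))  # offline-safe demo
    ```

    Running this from a client on a ULA subnet and again from one on a GUA subnet should show the ordering flip, matching the browser-test behaviour above.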


  • Netgate


    It is very possible that a client device would prefer IPv4 over IPv6 if all it has is a ULA address and the destination is GUA (or even outside the ULA /48).
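
    This behaviour follows from RFC 6724 address selection. In the default policy table, ULA (fc00::/7) carries label 13 while global IPv6 destinations fall under ::/0 with label 1, and IPv4 destinations (seen as v4-mapped, ::ffff:0:0/96) carry label 4. Destination rule 5 prefers destinations whose label matches the source address's label: a ULA-only client matches labels for IPv4 (4 = 4) but not for GUA IPv6 (13 ≠ 1), so IPv4 sorts first. A sketch of that lookup, as an illustration rather than any client's actual implementation:

    ```python
    # Sketch of the RFC 6724 default policy table (section 2.1) and the
    # longest-prefix lookup behind rule 5 ("prefer matching label"),
    # which is the likely reason a ULA-only client sorts IPv4 first.
    import ipaddress

    # (prefix, precedence, label)
    POLICY_TABLE = [
        ("::1/128",        50,  0),   # loopback
        ("::/0",           40,  1),   # any IPv6 (GUA falls here)
        ("::ffff:0:0/96",  35,  4),   # IPv4-mapped (IPv4 destinations)
        ("2002::/16",      30,  2),   # 6to4
        ("2001::/32",       5,  5),   # Teredo
        ("fc00::/7",        3, 13),   # ULA
        ("::/96",           1,  3),   # IPv4-compatible (deprecated)
        ("fec0::/10",       1, 11),   # site-local (deprecated)
    ]

    def lookup(addr: str):
        """Longest-prefix match an address against the policy table,
        returning its (precedence, label)."""
        ip = ipaddress.ip_address(addr)
        if ip.version == 4:  # IPv4 is matched via its v4-mapped form
            ip = ipaddress.ip_address("::ffff:" + addr)
        best = None
        for prefix, prec, label in POLICY_TABLE:
            net = ipaddress.ip_network(prefix)
            if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
                best = (net, prec, label)
        return best[1], best[2]

    _, ula_label = lookup("fd00::1")                   # ULA source: label 13
    _, gua_label = lookup("2607:f8b0:400b:80f::200e")  # GUA dest:   label 1
    _, v4_label  = lookup("172.217.165.14")            # IPv4 dest:  label 4
    print("IPv6 labels match:", ula_label == gua_label)  # False: IPv6 demoted
    print("IPv4 labels match:", v4_label == 4)           # True: IPv4 wins
    ```

    Note the latency numbers above are irrelevant to this: the sort happens before any packets are sent, so a ULA-sourced client demotes GUA destinations regardless of round-trip times.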

    If there is a setting to behave differently, it would be a setting in the client's stack configuration.

    > Needless to say Advanced->Networking "prefer IPv4 to IPv6" is not checked.

    That is for connections from the firewall itself, not connections through the firewall. The firewall cannot tell the client which to use. It makes a decision based on the status of its network stack. A setting such as that on the client is what I was talking about up there.
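
    For what it's worth, on glibc-based Linux clients that setting is /etc/gai.conf, which overrides the RFC 6724 policy table used by getaddrinfo. A hedged sketch of what restoring IPv6 preference for ULA sources might look like (note the well-known gotcha that specifying any label line discards the entire built-in table, so the defaults must be restated; other OSes have their own mechanisms, e.g. `netsh interface ipv6 set prefixpolicies` on Windows):

    ```
    # /etc/gai.conf sketch -- restate the defaults, but give ULA
    # (fc00::/7) the same label as global IPv6 so rule 5 no longer
    # demotes IPv6 destinations for ULA-sourced clients.
    label ::1/128        0
    label ::/0           1
    label 2002::/16      2
    label ::/96          3
    label ::ffff:0:0/96  4
    label fc00::/7       1    # default label is 13
    label 2001::/32      5
    ```

    Whether this is a good idea is another question, since it assumes the NPt translation always gives the ULA clients working IPv6 reachability.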

