Fat pipe to remote server: GRE tunnel -> LAGG in broadcast mode?!

  • I'm trying to establish several GRE tunnels to a remote server and aggregate (bond / LAGG) them so they appear as one fat pipe from pfSense to the remote server. The purpose is to create a stable link that is suitable for VoIP, which means that stable latency (minimal jitter) and minimal packet loss are more important than high bandwidth. This is why I would like to select broadcast mode for the LAGG, as it means that every packet is sent on all tunnels at the same time.

    The tunnels will run on different ADSL lines from different ISPs, minimising the likelihood of a total breakdown due to a poor internet connection between Vietnam and Europe.
    (Internet from Vietnam to Europe is terrible, but there is normally a way through via at least one ISP.)

    Vietnam: pfSense with 4 ADSL lines from four different providers
    Europe: Linux Debian server running Asterisk

    GRE tunnels are working: I have two tunnels that work fine over two different ISPs.
    LAGG is not working at all for GRE.
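    For reference, a sketch of how one such GRE tunnel could be brought up on the Debian end with iproute2; every address, interface name and value below is a placeholder for illustration, not taken from the actual setup:

    ```shell
    # 203.0.113.10  = Debian server's public IP (assumed placeholder)
    # 198.51.100.20 = pfSense WAN IP on ISP #1 (assumed placeholder)
    ip tunnel add gre1 mode gre local 203.0.113.10 remote 198.51.100.20 ttl 255
    ip addr add 10.99.1.1/30 dev gre1   # tunnel-internal /30 (assumed)
    ip link set gre1 up
    # repeat as gre2..gre4 for the other three pfSense WAN IPs
    ```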

    Is there any solution for this?

    How to do broadcast mode for LAGG/Bonding on pfSense?

    Will I receive a clean stream of packets (UDP) at the other end, or will there be duplicates because every packet is sent on both tunnels?

    Any help, suggestions and ideas are more than welcome :)


  • LAYER 8 Global Moderator

    where did you get the idea that lagg was 1 fat pipe?  That is not how it works..  1+1 does not equal 2 in lagg/etherchannel/portchannel, whatever term you want to use.

    if you have lots of different sessions you can load balance them over the lagg and reach 2..  But it does not turn the connection into one 2x connection, it's still just 1 and 1.

    Not sure how you think bonding 4 connections is in any way going to lower or minimize the jitter or latency you are seeing.

  • Thanks for the reply, johnpoz!

    The idea is from VPN Bonding and link aggregation.

    If 2 lines are bonded they are still 2 lines, but the difference is that both lines appear to have the same IP. (That is why it's different from WAN load balancing.)
    And as you say, 1+1 does not equal 2, which is correct because there is some overhead. However, 1+1 is more than 1 in terms of bandwidth.

    The normal modes of bonding / aggregation aim at increased bandwidth and fault tolerance by distributing the packets over the available lines, typically round-robin or something smarter. These modes will not help to reduce packet loss or jitter.

    However, in Linux there is a less known mode called broadcast (mode 3) which sends the same packet on all the lines at the same time. I have not tried it, but my understanding is that the packets will be collected at the other end and only the packet that arrives first will be forwarded to the target interface. If this is the way it works, then it would help to reduce jitter and packet loss:

    • The packet is not lost unless all lines are down.
    • Only the first copy is kept, which means that delayed packets are discarded. This would reduce jitter, because it's not normal that packets suddenly arrive faster than usual. And if they for some reason did arrive too early, the jitter buffer in VoIP would handle it anyway.
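    On Linux, that mode could be set up roughly like this. A sketch only: note that the bonding driver normally accepts only Ethernet-like member interfaces, while plain GRE carries bare IP with no Ethernet header (which may also be why LAGG over GRE fails). gretap (Ethernet-over-GRE) is the usual workaround; all names and addresses here are assumed placeholders:

    ```shell
    # Sketch: broadcast-mode bond over two Ethernet-over-GRE tunnels.
    # Plain layer-3 GRE generally cannot be enslaved to a bond; gretap can.
    # Addresses 203.0.113.10 / 198.51.100.20/.30 are placeholders.
    ip link add gretap1 type gretap local 203.0.113.10 remote 198.51.100.20
    ip link add gretap2 type gretap local 203.0.113.10 remote 198.51.100.30
    ip link add bond0 type bond mode broadcast   # mode 3: send on all members
    ip link set gretap1 master bond0
    ip link set gretap2 master bond0
    ip addr add 10.99.0.1/30 dev bond0
    ip link set gretap1 up
    ip link set gretap2 up
    ip link set bond0 up
    ```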

    I'm fairly sure about the bonding/aggregation idea related to bandwidth even though I have not yet tried it.
    The broadcast mode is my best guess based on very little documentation available.

    PS: LAGG / bonding / link aggregation has no meaning unless all links lead to the same target, so it cannot be used to achieve higher internet speed in general. It will only work between two specific servers where bonding is configured at both ends.

  • LAYER 8 Global Moderator

    "- Only the first packet is kept, which means that delayed packets are discarded. "

    What exactly is going to remove the dupe packets?  In broadcast mode all packets are sent on all interfaces in the lagg.  Where do you think only the first one gets through? If you are using broadcast mode there would be NO increase in available bandwidth.. you would be limited to the bandwidth of 1 connection.  You're just sending packets down all pipes to make sure the packet gets there.

    This is going to be very dirty on the client side with all the dupes it will see. While this might make it so your max jitter is not very high..  It could make for out-of-order packets.  Sending the same packets through multiple paths makes for strange stuff!!  And the jitter will prob be all over the place on the client because you have different lines with different paths, different utilization, etc.  Is this only going to be voip udp traffic?  You know what happens in tcp when it sees an out-of-order packet…  Can you say retrans ;)  Not something you want to generate for good bandwidth..

    if this was a good way to help with voip traffic it would be recommended all over the internet as a way to deal with crappy connections.  While I don't normally deal with voip - it has been a major topic at work as of late with 1 customer trying to do it over a vpn.  While we made drastic improvements with it, they were running their udp voice inside a tcp ssl tunnel.. which is not really good for lowering jitter ;)

    Good luck, but imho you're not going to fix anything with voip by putting it in a lagg, no matter what mode you use.

  • "- Only the first packet is kept, which means that delayed packets are discarded. "

    What exactly is going to remove the dupe packets?

    That is exactly the part that I'm unsure about and will test. My logic is that nobody would have made the broadcast mode for no reason, and the only reason I can see is to improve stability, but that will only happen if duplicate packets are discarded.

    Reordering is an issue especially for TCP; however, this is an issue of the internet in general, caused by jitter, which among other things is caused by multiple paths to the same destination. If jitter is reduced, then reordering should also be reduced.
    Retransmission only happens if the packet is lost, or so delayed that TCP gives up waiting for it, so this should also improve.

    However, if broadcast mode does not discard duplicates, then some other mechanism has to be used to achieve this. OpenVPN comes to mind, as it can use UDP in the transport layer and discards duplicates out of the box.
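    For what it's worth, the OpenVPN behaviour being relied on here is its replay protection: each packet carries a packet-ID, and IDs already seen inside a sliding window are dropped. It is designed against replay attacks rather than multi-path dedup, so whether it holds up here needs testing. A config fragment (values are illustrative, not tuned):

    ```text
    # OpenVPN replay protection is on by default over UDP; replay-window
    # only widens the window so a late duplicate arriving via a slow line
    # is still recognised and dropped rather than delivered.
    replay-window 128 30
    ```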

    What makes me doubt the idea is what you say: "if this was a good way to help with voip traffic it would be recommended all over the internet as a way to deal with crappy connections."
    So, yes, I'm slightly too humble to think that this will be the holy grail of solutions, but I will try anyway :)

    Maybe what's stopped the "internet" from jumping on this solution is that it requires a server we control, with a GOOD connection to the internet somewhere, to use as the destination for the LAGG.

    Anyway, the idea is being tested by bonding 2 OpenVPN connections using Debian with bonding mode = 3 (broadcast). If bonding does not discard the duplicates, then we will try with one more OpenVPN tunnel through the bonding interface (as OpenVPN can discard duplicates).
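    That proof of concept might look roughly like this on the Debian side. A sketch only: the hostname, ports, key files and interface names are all assumed placeholders, not from the actual setup:

    ```shell
    # Two point-to-point OpenVPN tap tunnels, one per ADSL line.
    # vpn.example.org, the ports and the static key files are placeholders.
    openvpn --daemon --dev tap0 --proto udp --remote vpn.example.org 1194 --secret /etc/openvpn/key0
    openvpn --daemon --dev tap1 --proto udp --remote vpn.example.org 1195 --secret /etc/openvpn/key1

    # Load the bonding driver directly in broadcast mode (mode 3);
    # this creates bond0 with the mode already set.
    modprobe bonding mode=broadcast miimon=100

    # Enslave both tap interfaces and bring the bond up.
    ip link set tap0 master bond0
    ip link set tap1 master bond0
    ip link set bond0 up
    ```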

    My first goal is proof of concept… if it works, then a refined solution has to be worked out :)
