New TCP congestion algorithm - BBR



  • https://patchwork.ozlabs.org/patch/671069/

    Google has been using it; it lowers latency, increases throughput, works with modern AQMs, and does not get starved by other algorithms in use. What's not to like? Best yet, it does not require the receiver to interact in any special way to get the full benefit. Just upgrade your servers and watch the Internet become a nicer place for all.



  • What's not to like?

    • it probably injects ads into your tcp stream  :D
    • the patch appears to be for Linux
    • unclear (to me) that it would have any gain running on a router


  • The patch may be for Linux, but the algorithm could be implemented by anyone, assuming Google doesn't lay claim to a patent on it.

    It wouldn't help a router/firewall, but it would be useful for any servers.



  • I saw this post over on HackerNews and thought you might be interested, Harvy. The poster is a FreeBSD kernel dev. https://news.ycombinator.com/item?id=12681091

    (Regarding TCP improvements in FreeBSD 11.)

    It matters most for people doing 10-100 Gbit/s of throughput, though CPU usage will be lower and more stable in all cases.
    There has been a lot of improvement to many network card drivers in 11, and I am helping to push/fund the final integration of Matt Macy's "iflib" for the common Intel em/igb/ixgbe drivers.
    There are a lot of goodput improvements coming soon, which will affect all TCP users. I had Matt Macy upgrade TCP CUBIC to match the 2016 RFC and most Linux behaviors (HyStart). Hiren Panchasara has been working full time for almost 2 years to address many other goodput and correctness issues in the TCP stack. Some of these are in 11, but the majority will hit in 11.1.
    Another company is working on the recently announced BBR congestion control from Google and a TCP stack with RACK/PRR https://wiki.freebsd.org/DevSummit/201606/Transport. The end result of all this will be a more tightly integrated and coherent TCP implementation, which should make FreeBSD have the best network stack again in 2017 after falling behind for a while.



  • How do I start using BBR?



  • Fujitsu Laboratories was doing something similar in 2013, too; perhaps not the same thing, but 30 times faster than the ordinary TCP
    protocol we are still using today. Fujitsu Laboratories Ltd. - Press release



  • @BlueKobold:

    Fujitsu Laboratories was doing something similar in 2013, too; perhaps not the same thing, but 30 times faster than the ordinary TCP
    protocol we are still using today. Fujitsu Laboratories Ltd. - Press release

    That's an interesting read, but it seems Fujitsu's tech was a tunneling technology that wrapped TCP between two tunneled networks and acted like a TCP accelerator: reducing ACK latency, compensating for packet loss, and keeping TCP connections with small transmission windows from being bandwidth-limited by high RTTs.



  • I'm now using BBR on Ubuntu 17.10, and it works.

    I want to switch pfSense to BBR or CUBIC, but I can't change it.

    I added cc_cubic_load="YES" to loader.conf and net.inet.tcp.cc.algorithm=cubic to sysctl.conf.

    Shell output: sysctl net.inet.tcp.cc.available still shows net.inet.tcp.cc.available: newreno

    So I still can't change it. Does anyone know how?
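    For anyone hitting the same wall: on stock FreeBSD the switch is normally done like the sketch below. It assumes the cc_cubic kernel module is actually present in the build, which the pfSense kernel may not include (that would explain why only newreno is listed).

        kldload cc_cubic                        # load the CUBIC module now; cc_cubic_load="YES" in loader.conf covers future boots
        sysctl net.inet.tcp.cc.available        # cubic should now be listed alongside newreno
        sysctl net.inet.tcp.cc.algorithm=cubic  # make CUBIC the default for new connections

    If kldload reports that cc_cubic.ko cannot be found, the module simply is not shipped with that kernel, and no loader.conf or sysctl.conf setting will make it available.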



  • I know I'm late to the party, but I actually found out about this algorithm just recently as I was searching for network settings to tune for Linux hosts.

    Ran some tests using TCP BBR and I have to say I'm quite impressed with the performance:

    1. Performing a local test with Flent between two 10Gbit Linux hosts using TCP BBR, sitting on different network segments (i.e. the test was done across the firewall), resulted in more stable data transfer and lower latency. Using TCP BBR I had no trouble pushing 14-16 Gbit of traffic across the pfSense firewall (the Flent test is bi-directional), with latencies averaging 1-2 ms during the test. Using the prior default TCP congestion algorithm (CUBIC), data transfer was less stable (more variability in bandwidth) and total bandwidth was a little lower as well. Latencies were closer to the 3-6 ms range.

    2. Performing a WAN test I also got better upload performance than before. I have a 1Gbit symmetric Fiber connection and using TCP BBR I saw higher upload speeds, especially over longer distances (e.g. between East Coast and West Coast). I use fq_codel to manage WAN traffic since I have 10Gbit hosts sending traffic into a 1Gbit interface -- it all seems to work quite well still with TCP BBR enabled on the hosts.
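    For anyone wanting to try the same on Linux hosts, here is a minimal sketch of the tuning (it assumes a 4.9+ kernel with the tcp_bbr module built; on kernels before 4.13, BBR also relies on the fq qdisc for pacing). The file path is just illustrative:

        # /etc/sysctl.d/90-bbr.conf
        net.core.default_qdisc = fq               # pacing qdisc BBR needed before in-kernel pacing (4.13+)
        net.ipv4.tcp_congestion_control = bbr

        # apply and verify
        sysctl --system
        sysctl net.ipv4.tcp_congestion_control    # should report: net.ipv4.tcp_congestion_control = bbr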



  • @yon Google is working on BBR2. Lots of improvements in making it both more friendly and more resilient.



  • @harvy66 said in New TCP congestion algorithm - BBR:

    @yon Google is working on BBR2. Lots of improvements in making it both more friendly and more resilient.

    Hi @Harvy66 - any ideas what specifically they are working on changing/updating? Thanks in advance.


 
