Playing with fq_codel in 2.4
-
What is the consensus on using the "Weight" option on multiple child queues under each limiter to set priority?
ie.
downloadHigh -> Weight = 60
downloadDefault -> Weight = 30
downloadLow -> Weight = 10
uploadHigh -> Weight = 60
uploadDefault -> Weight = 30
uploadLow -> Weight = 10
-
Hi @bjsmith - I have been using a setup similar to this for some time to guarantee bandwidth across different network segments. From the basic testing that I have done, it seems to be working fine.
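For reference, pfSense builds its limiters on ipfw/dummynet, so a weighted setup like the one above maps roughly onto commands like these (a sketch only - pipe/queue numbers, bandwidth, and interface are hypothetical; the GUI generates the real rules):

```shell
# Download limiter: one pipe (the bandwidth cap), one scheduler,
# and three weighted child queues (hypothetical numbers throughout)
ipfw pipe 1 config bw 100Mbit/s
ipfw sched 1 config pipe 1 type fq_codel
ipfw queue 1 config sched 1 weight 60   # downloadHigh
ipfw queue 2 config sched 1 weight 30   # downloadDefault
ipfw queue 3 config sched 1 weight 10   # downloadLow
# Note: whether the weights are actually honored depends on the
# scheduler type - worth verifying for fq_codel specifically.

# A firewall rule then steers traffic into a queue, e.g.:
ipfw add 100 queue 1 ip from any to any in recv igb0
```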
Hi @dtaht - it's great to see you joining the discussion! I have read quite a few of your posts and writings, especially over at bufferbloat.net, which has been an invaluable resource. Thank you so much for all your contributions to this effort; it truly is appreciated.
I first ran into the bufferbloat problem a couple of years ago when I had a 150/150 symmetric fiber connection and noticed that the upload developed bloat while running the DSL Reports speed test (which at the time genuinely surprised me). After a lot of research and trial and error, I initially addressed the issue with the ALTQ FAIRQ scheduler and CoDel AQM in pfSense, but eventually switched to fq_codel as support was added in later pfSense versions.
I still use fq_codel today, though now on a faster 1Gbit symmetric fiber connection - even at high WAN speeds bufferbloat can still become an issue. These days my LAN also runs at 10Gbit and feeds into the slower 1Gbit WAN link - using fq_codel keeps everything well behaved. More recently I discovered the Google-developed TCP BBR congestion control algorithm. From the basic testing that I have done, it works great alongside fq_codel and has shown significant improvements on the upload side of my WAN connection, especially over longer distances.
Flent has been an invaluable network analysis tool while performing these experiments and has become my go-to application for testing network performance after making changes/optimizations/tweaks, etc. I have used it for testing both WAN performance and LAN performance at 10Gbit across the pfSense firewall (for instance, with two TCP BBR enabled 10Gbit Linux hosts it has produced some really pretty charts).
I won't lie - all this optimization and tweaking can get a bit addictive, but having a reliable and stable connection makes it well worth it.
-
I'm glad that our efforts have been so useful for you! Going back to my longer post above, I ended up adding 50+ people to my credits list that will end up as a blog entry. I would rather like to fix about 2B routers and the only way to do that is encourage those that get the benefits of fixing bufferbloat to help get rid of it for others. There's more than a few business models for that... but if you could tell me you'd also gone and fixed it for a friend, upgraded your local coffee shop, helped a non-profit or small business out of the bloat-hole, it would cheer me up more. Doing a preso for a user group, nagging your ISP, are also things worth doing. Posting more before/after results from flent, etc.
As usual, I'm teetering on the edge of burnout. I promised a few folk that I'd stick with this cause until deployment was underway with correct code, and we're very close to that now. I look forward to one day leveraging all this low latency stuff as a default with applications that can take advantage of it - imagine quality 48kHz stereo voice running at 2.7ms sample rates, for example, or VR that doesn't make your head spin because it runs at 4ms (240 frames/sec).
-
@tman222 - also, I'd like more folk to start posting "debloat events/day". Codel kicks in rarely, but I'm pretty sure every time it does it saves on a bit of emotional upset and jitter for the user. For example I get about 3000 drops/ecn marks a day on my inbound campus link (about 12000 on the wifi links), and outbound a 100 or so. Every one of those comforts me 'cause I feel like I'm saving a ~500ms latency incursion for all the users of the link.
I'm kind of curious how many drops/marks a day your 10gigE/1gig link has.
-
@dtaht said in Playing with fq_codel in 2.4:
@tman222 - also, I'd like more folk to start posting "debloat events/day". Codel kicks in rarely, but I'm pretty sure every time it does it saves on a bit of emotional upset and jitter for the user. For example I get about 3000 drops/ecn marks a day on my inbound campus link (about 12000 on the wifi links), and outbound a 100 or so. Every one of those comforts me 'cause I feel like I'm saving a ~500ms latency incursion for all the users of the link.
I'm kind of curious how many drops/marks a day your 10gigE/1gig link has.
Thanks @dtaht - I'd be happy to share that info. What's the best way to collect that data under FreeBSD? I'm assuming you are referring to the fq_codel packet drops over a 24hr period instead of interface drops. Thanks again.
-
Heh. I don't know. I tried to fire up bsd once recently and found it entirely alien.
In linux it's
tc -s qdisc show dev whatever
They also show up as interface drops, so I use mrtg to track it. I don't have ecn marks though, in my snmp mib. That's growing important.
If you figure it out (or it's a missing facility) let us know!
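On the Linux side, one low-tech way to collect it (a sketch - interface name and log path are hypothetical) is just to snapshot the counters from cron and diff day to day:

```shell
#!/bin/sh
# Append a timestamped snapshot of the qdisc counters. The "dropped"
# and fq_codel "ecn_mark" fields are cumulative since the qdisc was
# installed, so diffing two snapshots taken 24h apart gives
# debloat events/day. Interface and path are hypothetical.
DEV=eth0
LOG=/var/log/debloat_events.log
{
    date
    tc -s qdisc show dev "$DEV"
} >> "$LOG"
```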
-
I note also that flent can collect a lot of stats along the way on its tests - cpu usage, qdisc stuff, etc - but we don't have a lot of bsd support in there. See flent/flent/tests/*.inc
-
Another test in flent I've started using more of late is the "square wave" tests. These "speak" to those with more of an EE background. There's one that directly compares bbr and cubic in particular. Most people find the rrul test overwhelming at first. I used it on a recent preso to broadcom: http://flent-fremont.bufferbloat.net/~d/broadcom_aug9.pdf
-
@dtaht said in Playing with fq_codel in 2.4:
Heh. I don't know. I tried to fire up bsd once recently and found it entirely alien.
In linux it's
tc -s qdisc show dev whatever
They also show up as interface drops, so I use mrtg to track it. I don't have ecn marks though, in my snmp mib. That's growing important.
If you figure it out (or it's a missing facility) let us know!
Thanks @dtaht - I do know that one can see the statistics of fq_codel with the "ipfw sched show" command, but I'm not sure how to go about logging that for a 24 hour period. If anyone else here has an idea, please do let me know. Maybe the logging functionality is something that still needs to be added?
@dtaht said in Playing with fq_codel in 2.4:
I note also flent can collect a lot of stats along the way on it's tests - cpu usage, qdisc stuff, etc - but we have not a lot of bsd support in there. see flent/flent/tests/*.inc
So I do run the Flent application on Linux (Debian Stretch) and I see those different test options in the front UI. The only thing I can't seem to figure out is how to add parameters to a test through the UI -- maybe to do that I have to run it from the command line.
@dtaht said in Playing with fq_codel in 2.4:
Another test in flent I've started using more of late is the "square wave" tests. These "speak" to those with more of an EE background. There's one that directly compares bbr and cubic in particular. Most people find the rrul test overwhelming at first. I used it on a recent preso to broadcom: http://flent-fremont.bufferbloat.net/~d/broadcom_aug9.pdf
Thanks @dtaht for this additional info. I didn't quite know which test you meant before I saw the presentation :). I ran the TCP 4 Up Squarewave test as well and got some interesting results. Using fq_codel on the WAN link and netperf.bufferbloat.net as the testing server on the other side, it doesn't look nearly as nice as what you have in your slides (in fact, it looks closer to what you had on slide 36). It's somewhat similar on a local 10Gbit to 10Gbit test (although no fq_codel is enabled there): I had to turn the IDS/IPS off on the interfaces before the flows started to behave better, but still not quite as nicely as what you had. Having said that, I think my results might be influenced by the fact that I already have TCP BBR enabled on my Linux machines, which requires the FQ scheduler to be enabled as well via tc qdisc. Do you think I should turn that off and rerun with the cubic defaults?
Thanks again for all your insight and advice.
-
re: "ipfw sched show" command, but I'm not sure how to go about logging that for a 24 hour period.
cron job for (date; ipfw sched show) >> debloat_events
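As a concrete crontab entry, that might look something like this (paths hypothetical):

```shell
# Snapshot fq_codel limiter stats at midnight every day; the counters
# are cumulative, so consecutive snapshots can be diffed for events/day
0 0 * * * (date; /sbin/ipfw sched show) >> /var/log/debloat_events
```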
re: additional stats. I usually script those but there is a way to stick additional stats in the default .rc file I think.
--te=cpu_stats=machine,machine --te=qdisc_stats=login@machine,... --te=qdisc_interfaces=etc
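Put together, a full invocation might look something like this (hostnames are hypothetical, and I'm writing the parameter names from memory - check flent's docs for the exact spelling):

```shell
# rrul test against a public server, also gathering CPU and qdisc
# stats from the test host and the gateway along the way
flent rrul -l 60 -H netperf.bufferbloat.net \
      --te=cpu_stats_hosts=localhost \
      --te=qdisc_stats_hosts=root@gateway \
      --te=qdisc_stats_interfaces=eth0 \
      -t "fq_codel-wan-baseline"
```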
re "nearly as nice". One of my fears is of bias without actual testing - "I made this change and the network feels better, ship it!" - where people often overestimate the network's bandwidth, or fiddle with bbr, etc., and don't measure. So my other favorite flent feature is that you can upload your .flent.gz files somewhere and have an expert take a look (hint hint). Flent by default stores very little metadata about your machine, so I'd hoped people would be comfortable sharing its data more widely, unlike, for example, packet captures. (Note the -x option grabs a lot more.)
I am under the impression pacing works universally on modern kernels but have not tested it - particularly not under a vm. That's another fav flent feature, go change out sch_fq for fq_codel and you can easily compare the two plot changes over that single variable.
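i.e., something like this on the Linux test host (device name hypothetical; rerun the same flent test after each swap so only the one variable changes):

```shell
# A/B the two qdiscs over a single variable, as suggested above
tc qdisc replace dev eth0 root fq        # sch_fq: pacing, pairs with BBR
flent tcp_4up_squarewave -H netperf.bufferbloat.net -t "sch_fq"

tc qdisc replace dev eth0 root fq_codel  # fq_codel: AQM + flow queueing
flent tcp_4up_squarewave -H netperf.bufferbloat.net -t "fq_codel"
```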
I love flent. It's the tool that let bufferbloat.net pull so massively ahead of other researchers in the field.
-
It's probable your graph looks like pg 36 'cause that box is on the east coast? We have flent servers all over the world - flent-fremont.bufferbloat.net on the west coast, one each in Germany, Denmark, and London, several on the east coast, one in Singapore....
-
@dtaht said in Playing with fq_codel in 2.4:
It's probable your graph looks like pg 36 'cause that box is on the east coast? we have flent servers all over the world - flent-fremont.bufferbloat.net on the west coast, one in germany, denmark, and london, several on the east coast, one in singapore....
Thanks @dtaht - I think that box (netperf.bufferbloat.net) might be located near Atlanta judging by a traceroute to it. Is the list of all Flent servers available publicly somewhere? Or are those testing servers only available to the contributors of the project?
Thanks again.
-
We probably should publish it because it doesn't vary much any more. We used to have 15; now it's 7? 6? So far we haven't had any major scaling issues.
Unlike speedtest.net we don't have a revenue model, and the hope was we'd see folk setting up their own internal servers for testing. It's just netperf and irtt.
-
@dtaht
I really respect the work done by you and the people fighting bufferbloat all over the world.
As for me, I use AQM everywhere possible to eliminate bufferbloat from my networks. The one problem still to be investigated is those networks where we can't detect the bottleneck size, i.e. where the bandwidth limit varies a lot over the course of the day. I know there are a lot of such ISP networks in Japan, for example.
Yes, we can set the bandwidth limit to the minimum, but that's not smart enough; it would be better to detect the current bandwidth automatically and adjust everything accordingly. I do know that there are some software and router script samples on the net that do this in various ways, but it still needs to be studied and developed in many systems, pfSense included.
-
@w0w - and you. keep fighting the bloat!
My sadness is that I'd hoped we'd see all the core bufferbloat-fighting tools in DSL/fiber/cablemodems by now. BQL + FQ-codel work well on varying line rates, so long as your underlying driver is "tight". We also showed how to do up wifi right. Also thought we'd see ways (snmp? screen scraping?) of getting more core stats out of more devices, so as to be able to handle sag better.
with sch_cake we made modifying the bandwidth shaping nondestructive and really easy (can pfsense's shaper be changed in place?) - it's essentially
while getlinestatssomehow
do
    # with cake, the shaped rate can be changed in place, nondestructively:
    tc qdisc change dev yourdevice root cake bandwidth "$newbandwidth"
done
So far the only deployment of that feature is in the evenroute, but I'm pretty sure at least a few devices can be screenscraped.
As for detecting issues further upstream, there are ways appearing, but they need to be sampling streams to work (see various tools of kathie's: https://github.com/pollere )
-
Hi @dtaht,
I have a basic question regarding bufferbloat that I never quite understood. I can see how there can be bufferbloat on the uplink of a WAN connection (e.g. 1Gbit LAN interfaces sending into, say, a 10Mbit cable modem upload). However, what causes bufferbloat on the download, since the interface the data is being sent into is generally faster than the download speed on the WAN interface (e.g. a 250Mbit cable modem download feeding a 1Gbit LAN interface)?
Thanks in advance for the insight and explanation, I really appreciate it.
-
The buffering in that case builds at the CMTS (on the other side of the cable modem). CMTSes tend to have outrageous amounts of FIFO buffering (680ms on my 120Mbit comcast connection), so if you set your inbound shaper to less than their setting, you shift the bottleneck to your device and can control it better. It's not always effective (you can end up in a pathological situation where the CMTS is buffering madly as fast as you are trying to regain control of the link), but setting an inbound shaper to 85% or so of your provisioned rate generally works, and you end up with zero inbound buffering for sparse packets, and 5-20ms for bigger flows, locally.
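In raw ipfw/dummynet terms, the inbound half of that looks roughly like this (a sketch - numbers and interface are hypothetical, and pfSense's limiter GUI generates the equivalent for you):

```shell
# Shape inbound to ~85% of a 250 Mbit/s provisioned rate (~212 Mbit/s)
# so the queue builds here, under fq_codel control, rather than in the
# CMTS's deep FIFO on the far side of the modem
ipfw pipe 1 config bw 212Mbit/s
ipfw sched 1 config pipe 1 type fq_codel
ipfw queue 1 config sched 1
ipfw add 100 queue 1 ip from any to any in recv igb0
```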
Does that work for you? (It's still an open question for me as to how netgate does inbound shaping).
It's horribly compute intensive to do it this way, but since we've been after the cablecos for 7+ years now to fix their CMTSes with no progress, shaping inbound is a necessity. In my networks, I drop 30x more packets inbound than outbound but my links stay usable for tons of users, web PLTs are good, voip and videoconferencing "just work", netflix automagically finds the right rate... etc.
The buffering comes from bad shapers on the far side of the link. It's not just CMTSes that are awful. DSL is often horrific. I'm now seeing some 1G GPON networks with several seconds of downlink buffering. I guess they didn't get the memo.
-
I actually kind of wish I hadn't stopped work on "bobbie", a better policer. fq_codel is far too gentle and has the wrong goal for inbound shaping. Yes - it works better than anything we've ever tried, but a better policer would have zero delay for all packets at a similar cost in bandwidth and far less cpu. I think. Haven't got around to it. (basically you substitute achieving a rate in a codel like algorithm instead of a target delay). Tis research for someone else to do, I'm pretty broke after helping get sch_cake out the door.
-
Over here ( https://github.com/dtaht/fq_codel_fast ) I'm trying to speed up fq_codel a bit, and add an integrated multi-cpu inbound shaper.
-
@dtaht - thanks for the response - that makes a lot of sense.
Even though I used a cable modem in the example in my previous post, the question was really about GPON. I suppose what ends up happening is that there are buffers at the GPON card (hardware) for both upload and download. If there is too much data being pushed into the link from upstream servers (i.e. a bunch of people downloading), the buffers start to fill up and packet delay (bufferbloat) occurs for downstream users. Since GPON bandwidth is generally shared among several users, I suppose the severity can vary depending on the number of users, their usage patterns, and the level of congestion. Furthermore, I suppose it's likely easier to experience bufferbloat in the uplink direction since GPON is asymmetric (2.4Gbit down / 1.2Gbit up).
In my personal experience with a gigabit GPON link I have been fortunate: I can set inbound/outbound shaping at >95% of max bandwidth and still not experience any significant delay/bloat.