Playing with fq_codel in 2.4
-
I can confirm that your understanding matches my understanding. What I can't confirm is that my understanding is correct :D It's obviously a pretty confusing topic. Because of course you can have more than one child queue per pipe as well, and any child queue may be dynamic or not. In my setup right now, for example, I have two pipes (upload and download) and two child queues for each pipe.
Consider only my download pipe. It has two child queues: one with a weight of 30 and one with a weight of 70 (for low and high priority traffic respectively). Both child queues are dynamic, with /32 masks on the destination address. Based on my understanding, this means that every host on my LAN should get its own queue.
Now, if that's true, suppose I have 5 hosts that are downloading and directed to my 30-weight "low priority" download queue based on firewall rules and 1 host that is downloading and directed to my 70-weight "high priority" download queue. Each of the 5 "low priority" hosts will get their own queue, but will each of those queues have a weight of 30? My expectation would be no; instead, the weight of the child queue should be equally distributed among however many dynamic queues are spawned from it. So in this case, there would be 5 dynamic queues each with a weight of 6 spawned from the "low priority" child queue of weight 30 and 1 dynamic queue of weight 70 spawned from the "high priority" child queue of weight 70.
Maybe that's not exactly how it works, but my hope is that it's at least conceptually accurate, because it wouldn't make sense if dynamic queues each had the same weight as the child queue from which they were spawned. In the example above, I'd end up with 5 queues of weight 30 and 1 queue of weight 70. So collectively, my 5 low priority hosts would be getting a share of (150/220), or roughly 68%, of the parent pipe, and my 1 high priority host would be getting roughly 32%. That would turn my original intention on its head: reserving 30% of a saturated pipe for low priority hosts and 70% for high priority hosts.
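To make the setup concrete, here's roughly how a pipe with those two child queues would be configured in raw ipfw/dummynet terms (a sketch only — the pipe bandwidth and the pipe/queue numbers are placeholders, not my actual config):

# Download pipe capped at a hypothetical line rate
ipfw pipe 1 config bw 30Mbit/s
# Low priority child queue: weight 30, with a /32 mask on the
# destination address so each LAN host gets its own dynamic queue
ipfw queue 1 config pipe 1 weight 30 mask dst-ip 0xffffffff
# High priority child queue: weight 70, same per-host mask
ipfw queue 2 config pipe 1 weight 70 mask dst-ip 0xffffffff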
I don't know if these ruminations are helpful or simply add to the confusion… but at least it's fun trying to think it through ;) Still hoping for a true dummynet prodigy to poke his or her head into this thread.
-
I’ve noticed all of the fq_codel config examples have inbound and outbound queues.
It’s been my experience that shaping or prioritizing inbound WAN traffic, on a home/office router, is ineffective and degrades network performance.
Quick testing on DSLReports, I get much better results when I disable the inbound firewall rules and only shape the outbound WAN traffic:
P4 2.4GHz, 2 cores w/HT; Spectrum 30 down / 15 up
No shaping - normal throughput & C+ bufferbloat
IN & OUT shaping - reduced download throughput (50%) & A+ bufferbloat
OUT only shaping - normal or slightly improved overall throughput & A+ bufferbloat
Has anyone seen download performance increase with the inbound shaper activated?
Btw, I’m new to the forum, thanks to everyone who’s posted, great discussion.
-
I’ve noticed all of the fq_codel config examples have inbound and outbound queues. It’s been my experience that shaping or prioritizing inbound WAN traffic, on a home/office router, is ineffective and degrades network performance. […] Has anyone seen download performance increase with the inbound shaper activated?
I agree, but only because of my similarly limited experience. Looking at others' experiences, download rate-limiting has its uses, but it's highly dependent on who your ISP is and what hardware they use.
From a bufferbloat perspective, avoiding any buffering that is beyond your control is vital… whether these uncontrollable buffers are making a noticeable impact, well, that is very situational.
(IIRC) I saw a ~20% decrease in worst-case latency on download, but I lost ~10% of my average download speed. It was not worth it to me.
-
To go along with what Nullity said, upload generally has the highest bloat, gives you the most return, and is the one you actually have full control over. One of the reasons for higher bloat on upload is that most bufferbloat is caused by fixed-size buffers sized for the maximum theoretical provisioned rate the device supports. This means a 30Mb cable connection may have the buffer of a 300Mb connection. To make matters worse, the buffer for 300Mb is also larger than optimal. (For a rough sense of scale: a buffer holding 2 Mbit of data adds about 7 ms of delay at 300 Mb/s, but about 67 ms at 30 Mb/s.) The good news for download is that the bottleneck is more likely to be in the ISP's uplink, which should have properly sized buffers, so downloading tends not to have too bad of bloat.
Upload, on the other hand, tends to be much slower than download for most residential connections, making the fixed-size buffers an even worse issue. And when uploading, the source of the data sits on the wrong side of the bloat.
-
I think I might be the exception here - I actually get (seemingly) slightly better performance when I have fq_codel applied to both upload and download on my fiber connection. That being said, the only way I have tested this to date has been through speed tests, in particular the DSL Reports speed test, to get an idea of bufferbloat. With fq_codel applied to the download side as well, my download during the test is a bit more stable, comes in slightly higher (bandwidth), and has lower average latency during the test.
In fact, I have found the best performance so far for me has been by using a 940/940 limit on a gigabit FTTH connection, with a somewhat more aggressive target of 3ms and interval of 60ms – the fq_codel defaults are 5ms and 100ms. This does limit the download and upload speed to about 915-920Mbit on my connection, but I'm willing to take a 3-3.5% hit on bandwidth to have lower average latency and connection stability.
One could argue that on a gigabit fiber connection all this really doesn't matter, since there are very few cases where the bandwidth is maxed out anyway, and that's probably true. But nonetheless I do like having these settings to ensure stability.
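For reference, here's roughly what those settings would look like as raw ipfw commands, in the style of the shellcmd examples elsewhere in this thread (a sketch — the pipe/sched numbers are placeholders, and I'm assuming the ms suffix is accepted on the time parameters):

# 940 Mbit/s pipe with a tighter CoDel target (3 ms) and interval
# (60 ms) than the 5 ms / 100 ms fq_codel defaults
ipfw pipe 1 config bw 940Mbit/s
ipfw sched 1 config pipe 1 type fq_codel target 3ms interval 60ms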
-
I’ve noticed all of the fq_codel config examples have inbound and outbound queues. […] Quick testing on DSLReports, I get much better results when I disable the inbound firewall rules and only shape the outbound WAN traffic: […] IN & OUT shaping - reduced download throughput (50%) & A+ bufferbloat […]
50% is a very large throughput decrease - do you mind sharing your settings?
-
I’ve noticed all of the fq_codel config examples have inbound and outbound queues. It’s been my experience that shaping or prioritizing inbound WAN traffic, on a home/office router, is ineffective and degrades network performance. […] Has anyone seen download performance increase with the inbound shaper activated?
Well, given that for it to work you have to set the inbound pipe lower than your line capacity, yes, download speeds will be a bit slower to match the pipe size.
Shaping isn't about maximum possible throughput but about maintaining fairness across different network applications and QoS.
Some applications will, by their design, completely swamp a line when downloading and act as a sort of DDoS.
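As a rough sketch of what that trade-off looks like in ipfw terms (numbers are hypothetical — the point is just that the inbound pipe sits below the advertised line rate, so the queue builds in dummynet where fq_codel can manage it, rather than in the ISP's gear):

# A 30 Mbit/s line shaped to ~27 Mbit/s inbound, roughly 10% below line rate
ipfw pipe 2 config bw 27Mbit/s
ipfw sched 2 config pipe 2 type fq_codel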
-
I think I might be the exception here - I actually get (seemingly) slightly better performance when I have fq_codel applied to both upload and download on my fiber connection. […]
You're not the exception :)
-
I'm running fq_codel along with HFSC classes.
It helped my bufferbloat go from A to A+, even when I put the max speed in HFSC and the limiter!
Before, with just HFSC, I had to run 2 or 3 Mbit lower for up and down to get just an A.
I also have 2 other limiters set lower, for guest clients, unknown and known. In shellcmd (also on filter change):
ipfw sched 1 config pipe 1 type fq_codel target 5 noecn quantum 300 limit 1000 interval 50 &&
ipfw sched 2 config pipe 2 type fq_codel target 5 noecn quantum 300 limit 1000 interval 50 &&
ipfw sched 3 config pipe 3 type fq_codel target 5 noecn quantum 300 limit 1000 interval 50 &&
ipfw sched 4 config pipe 4 type fq_codel target 5 noecn quantum 300 limit 1000 interval 50 &&
ipfw sched 5 config pipe 5 type fq_codel target 5 noecn quantum 300 limit 1000 interval 50 &&
ipfw sched 6 config pipe 6 type fq_codel target 5 noecn quantum 300 limit 1000 interval 50
-
I think I might be the exception here - I actually get (seemingly) slightly better performance when I have fq_codel applied to both upload and download on my fiber connection. […] One could argue that on a gigabit fiber connection all this really doesn't matter, since there are very few cases where the bandwidth is maxed out anyway, and that's probably true.
I see gigabit maxed out all of the time. YouTube, Netflix, and Hulu microburst their ~250KiB chunks at 1Gb/s. Packet sniff these TCP connections and I see back-to-back 1500 byte frames for about 2ms at a time. That's for steady state. I technically only have a 150Mb connection, but it's a 1Gb link that is policed to 150Mb. If I keep jumping around the video timeline, I can keep the video stream in a perma-buffering state, where it attempts to send at the full 1Gb/s. I can see 1Gb/s for about the first 100ms or so before the policer starts ramping up. That could represent a 100ms burst in latency if it were not for my ISP's AQM plus my HFSC shaping.
-
https://github.com/pfsense/pfsense/pull/3941
And happiness ensued…
:)
-
https://github.com/pfsense/pfsense/pull/3941
And happiness ensued…
:)
Fired up about it. Looking forward to this addition.
-
;D ;D ;D ;D
-
Great news!!!
I wonder if it's a good idea to clear the current fq_codel config before upgrading to 2.4.4 and rebuild it using the GUI after the upgrade? Otherwise, maybe things might not upgrade correctly? Does anyone have any thoughts on that?
Thanks in advance.
-
https://github.com/pfsense/pfsense/pull/3941
And happiness ensued…
:)
Fired up about it. Looking forward to this addition.
Me too! I submitted that PR after working on it this week; I actually got the idea from reading this thread. I'm testing it on my own right now; I have multi-LAN and multi-WAN, and I needed something to be able to classify traffic coming from WAN A out LAN A/B and WAN B out the same LAN paths. Dummynet works awesome for this, and I've gone from a Bufferbloat score of C to A with the patch. If you want to load the patch, you just have to make a queue under the limiter and assign it with a floating rule on the WAN interface (out direction).
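pfSense wires this up through its own firewall rules rather than raw ipfw, but in plain ipfw/dummynet terms the classification step amounts to something like the following (illustrative only — the rule number, queue number, and interface are placeholders, and this is not what pfSense actually generates):

# Push everything transmitted on the WAN NIC into dummynet queue 1
ipfw add 100 queue 1 ip from any to any out xmit em0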
Gripes about Traffic Shaper right now that the Limiter PR seems to help out with:
- Not enough parameter configuration for FQ_CODEL in the GUI (default FQ_CODEL target is 5000 on my install)
- Couldn't get traffic shaper (altq) to bind to LAGG interfaces
- Couldn't get traffic shaper (altq) to work with a multi-LAN setup for download classification
- PIE support
For those of you interested, here's a SS: https://i.imgur.com/N36gpXF.png
If anyone has any suggestions for changes or notices any problems, I'd be happy to factor them into my fork (and therefore the PR). Full disclosure, I am by no means an expert on dummynet/ipfw. I just have a lot of free time on my hands…
-
Really appreciate the work Matt!
-
@matt, why is the interval so large? The interval should be roughly equal to your upper typical RTT. 100,000ms is a pretty big RTT.
-
@matt, why is the interval so large? The interval should be roughly equal to your upper typical RTT. 100,000ms is a pretty big RTT.
Not sure. I'm loading these params in from sysctl upon save, so it's set to whatever the default is. I did think that was odd, also. Maybe I have the units wrong on target and interval.
EDIT: You're exactly right on this! I make a sysctl call to load in defaults, but it seems the units used on the sysctls are definitely not milliseconds! Looks like microseconds to me. I'm fixing the PR. Also, I've got a patch for 2.4.3 as well on my repository which will get this change, too.
EDIT 2: All fixed! And here is the applicable diff file for the repository, should anyone want to load the patch into their 2.4.3 install. I run pfSense at home, on the stable release, so I'm using this branch and copying changes to the PR (which is ahead in terms of code). Definitely don't use the PR's diff on a stable 2.4.3 install, since they've got some wonky PHP 7 stuff going on that causes some really weird behavior in the whole shaper module. This diff doesn't have those changes (main reason for two heads): https://github.com/pfsense/pfsense/compare/RELENG_2_4_3...mattund:RELENG_2_4_3.diff
If they accept this PR after reviewing it -- and I can almost guarantee they'll have critique -- I'll turn my attention to Limiter Info and work on a PR for that, too. IMHO we desperately need a better status page for dummynet if FQ_CODEL/FQ_PIE are gonna be in use more.
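For anyone who wants to verify the units on their own box, the dummynet fq_codel defaults are readable via sysctl, and the values are microseconds. The OIDs below are my assumption based on FreeBSD's dummynet fq_codel code; the values shown are the stock defaults:

sysctl net.inet.ip.dummynet.fqcodel.target
# net.inet.ip.dummynet.fqcodel.target: 5000     (5000 us = 5 ms)
sysctl net.inet.ip.dummynet.fqcodel.interval
# net.inet.ip.dummynet.fqcodel.interval: 100000 (100000 us = 100 ms)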
-
Won't lie, I am super excited about this!!!!!!!
-
Applied this diff: https://github.com/pfsense/pfsense/compare/RELENG_2_4_3...mattund:RELENG_2_4_3.diff
Adding a new queue by pressing the button on the limiter settings page opens a new page where you can add a new queue, then save and apply it, but it does not appear in the list and I don't see it anywhere. It could be just in my case, though, because I have some April build of 2.4.4, from before the PHP 7 preparations were made.