CoDel - How to use
-
I wish this could be set via config instead of compile time. One problem at a time though, ehh?
50/5 seems to be working well for me right now. I guess I'll see how my bufferbloat is affected once this change finally makes it. I'm getting 0ms of bufferbloat and full throughput already. According to DSLReports, my bloat can spike, but rarely. More of an issue when doing 32 upload streams.
-
Was never fixed AFAICT? https://redmine.pfsense.org/issues/4692
https://github.com/pfsense/pfsense-tools/commit/3108a902bd816036a3abffd3ec669767140891a7
I dunno. I am unsure of many things. :(
I probably should have updated the redmine submission. The redmine patch was just initial code to show what I had found, hopefully to help a dev pinpoint the problem.
The github patches were the best I could do, but I should probably stop trying to patch pfSense considering that I cannot build pfSense to test my code. :(
-
So this is from the latest nightly:
[2.2.4-DEVELOPMENT][admin@pfSense.localdomain]/root: pfctl -vs queue
altq on em0 codel( target 50 interval 100) bandwidth 600Kb tbrsize 1500
Interval successfully changed; now we just have to figure out where the target of 50 is coming from…
Edit: I just set the 'queue limit' to 25 in the GUI and my target is now 25.... Victory?
Edit2: From 2.2.4 19/07/2015 nightly, with queue limit set to 5:
[2.2.4-DEVELOPMENT][admin@pfSense.localdomain]/root: pfctl -vs queue
altq on em1 codel( target 5 interval 100) bandwidth 6Mb tbrsize 6000
  [ pkts: 85 bytes: 9938 dropped pkts: 0 bytes: 0 ]
  [ qlength: 0/ 50 ]
So it wasn't anything I did yesterday that fixed it, but it does seem to be fixed/workable in 2.2.4
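Edit3: for anyone else poking at this, after changing the 'Queue Limit' in the GUI and applying the shaper, the quickest way I know to see what actually got programmed is the same pfctl command as above, just filtered down:
pfctl -vs queue | grep -A2 codel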
-
If qlimit is 0, it defaults to 50, and codel gets the (initial?) target value from qlimit.
-
If qlimit is 0, it defaults to 50, and codel gets the (initial?) target value from qlimit.
is qlimit the queue length, or something else entirely?
-
-
If qlimit is 0, it defaults to 50, and codel gets the (initial?) target value from qlimit.
is qlimit the queue length, or something else entirely?
qlimit is the queue length, which becomes useless when codel is active, since codel dynamically controls the queue length (AQM).
So when using codel the 'queue limit' setting seems to change the target instead… handy, but not very obvious.
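To spell out what we think is happening, here is a rough sketch of the inferred logic as a throwaway shell script (inferred from the pfctl outputs above, not the actual ALTQ code):
#!/bin/sh
# Inferred behaviour only, NOT the real altq source: a blank/0 'queue limit'
# falls back to 50, codel's target (ms) is seeded from that value, and the
# interval stays at 100ms on 2.2.4.
qlimit=${1:-0}                      # 'Queue Limit' from the GUI, 0 if left blank
[ "$qlimit" -eq 0 ] && qlimit=50    # default queue length of 50
target=$qlimit                      # target appears to come straight from qlimit
interval=100
echo "codel( target ${target} interval ${interval} )"
Feeding it 25 or 5 prints the same targets seen in the pfctl output earlier.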
Thanks! -
Yeah, it is pretty confusing but I'll take CoDel however I can get it. :)
Ermal ported it himself, iirc. Ahead of the curve, that guy! :)
I still dunno how to view or set codel's parameters when it is a sub-discipline though. Default or gtfo, I suppose…
-
The whole target/qlimit thing applies to CoDel for both the scheduler and the child discipline?
Do you know if the interval changes? The interval is supposed to be 20x the target.
-
Yeah, it is pretty confusing but I'll take CoDel however I can get it. :)
Ermal ported it himself, iirc. Ahead of the curve, that guy! :)
I still dunno how to view or set codel's parameters when it is a sub-discipline though. Default or gtfo, I suppose…
I've just had a tinker and I can't find anything, but that certainly doesn't mean it's not there.
I've rarely used BSD; is there some /proc-type interface where the information comes from that can be queried directly? -
The whole target/qlimit thing applies to CoDel for both the scheduler and the child discipline?
Do you know if the interval changes? The interval is supposed to be 20x the target.
iirc, the sub-discipline setup is purely configured by hard-coded defaults and has no user configurable/viewable params that I am aware of. Hopefully, there is a simple way for a user to view/set the params in that situation. ermal? ;)
interval is the only value required by codel, so I do not think it changes. Technically, the target should be set based on the interval value, not vice versa.
afaik, current codel implementations do not automagically set interval to live RTT.

The CoDel building blocks are able to adapt to different or time-varying link rates, to be easily used with multiple queues, to have excellent utilization with low delay and to have a simple and efficient implementation. The only setting CoDel requires is its interval value, and as 100ms satisfies that definition for normal internet usage, CoDel can be parameter-free for consumer use.

See: https://tools.ietf.org/id/draft-nichols-tsvwg-codel-02.txt
I have tried to run a thought-experiment concerning how a 5ms interval should negatively affect codel's performance, but I cannot fully comprehend it. I need to set up a bufferbloat lab…
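To make the thought-experiment a bit more concrete: the draft linked above spaces successive drops at interval/sqrt(count) once the target has been exceeded for a full interval, so a quick back-of-the-envelope comparison from the shell (numbers are only illustrative):
#!/bin/sh
# Drop spacing per the CoDel control law: interval / sqrt(drops so far).
for interval in 100 5; do
    for count in 1 2 3 4; do
        spacing=$(echo "scale=1; $interval / sqrt($count)" | bc -l)
        echo "interval ${interval}ms, drop ${count}: next drop in ${spacing}ms"
    done
done
With a 5ms interval every spacing comes out 20x shorter than with 100ms, which would explain codel being far more aggressive (and hurting throughput) on a slow link.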
-
Yeah, it is pretty confusing but I'll take CoDel however I can get it. :)
Ermal ported it himself, iirc. Ahead of the curve, that guy! :)
I still dunno how to view or set codel's parameters when it is a sub-discipline though. Default or gtfo, I suppose…
I've just had a tinker and I can't find anything, but that certainly doesn't mean it's not there.
I've rarely used BSD; is there some /proc-type interface where the information comes from that can be queried directly?
iirc, the values could be gotten through some dev/proc interface, but it required an ioctl system call and could not be done via shell commands.
Though, I was confused then and now I've forgotten stuff, so I might be sense-making not-so-much.
-
Well, this is fun. It seems to actually perform worse with the 'correct' values in place.
With 50/5 I was seeing mostly <200ms response time with upstream saturated and a 'B' on dslreports bufferbloat test
With 5/100 I'm seeing mostly <300ms response time, with more between 200 and 300ms than before, and a 'C' on dslreports bufferbloat test -
That might explain why the CoDel people were saying they typically saw bufferbloat as low as 30ms, but I was seeing 0ms. pfSense may be more aggressive with the 5ms interval.
The interval is how often a single packet will be dropped until the packet's time in queue is below the target. If the target is 100ms with a 5ms interval, once you get 100ms of packets, CoDel will start dropping packets every 5ms and slowly increase the rate. It's not exactly how I say it, but close. They have some specific math that makes everything not quite as simple as described, but very similar.
The interval is supposed to be set to your "normal" RTT, and the target should be 1/20th that value. Most services I hit have sub-30ms pings. My interval should be, say, 45ms and my target 2.25ms.
If the interval is too high, CoDel will be too passive and allow increasing bufferbloat, but if it's too low, it will be too aggressive and reduce throughput.
Maybe this is why pfSense's CoDel gives bad packet loss and throughput on slow connections. If the interval is 5ms, many packets will be dropped in a row.
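To put numbers on that rule of thumb (the ping target here is only an example host; use whatever you normally talk to):
ping -c 10 example.com      # suppose the average RTT comes back around 45ms
# interval ~= typical RTT        -> 45ms
# target   ~= interval / 20      -> 45 / 20 = 2.25ms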
-
Well, this is fun. It seems to actually perform worse with the 'correct' values in place.
With 50/5 I was seeing mostly <200ms response time with upstream saturated and a 'B' on dslreports bufferbloat test
With 5/100 I'm seeing mostly <300ms response time, with more between 200 and 300ms than before, and a 'C' on dslreports bufferbloat test
I think you may have another problem/misconfiguration. You should be seeing MUUUUCH better than 200ms. My ADSL connection goes from 600ms without any traffic-shaping, to 50ms with CoDel on upstream during a fully-saturating, single-stream upload test. My idle ping to first hop is ~10ms.
but lol…. I have been laughing that the fixed parameter values would actually cause a performance decrease...
-
Well, this is fun. It seems to actually perform worse with the 'correct' values in place.
With 50/5 I was seeing mostly <200ms response time with upstream saturated and a 'B' on dslreports bufferbloat test
With 5/100 I'm seeing mostly <300ms response time, with more between 200 and 300ms than before, and a 'C' on dslreports bufferbloat test
I think you may have another problem/misconfiguration. You should be seeing MUUUUCH better than 200ms. My ADSL connection goes from 600ms without any traffic-shaping, to 50ms with CoDel on upstream during a fully-saturating, single-stream upload test. My idle ping to first hop is ~10ms.
but lol…. I have been laughing that the fixed parameter values would actually cause a performance decrease...
You're absolutely right, my problem is my ISP and their crappy excuse for a router, which I can't easily replace because it also handles the phones.
My connection will easily hit 2000ms+ if someone is uploading; <200ms is a massive improvement.
I'm also laughing a little at the results; based on your previous tests it's not a huge surprise, but an explanation would be nice!
-
Well, this is fun. It seems to actually perform worse with the 'correct' values in place.
With 50/5 I was seeing mostly <200ms response time with upstream saturated and a 'B' on dslreports bufferbloat test
With 5/100 I'm seeing mostly <300ms response time, with more between 200 and 300ms than before, and a 'C' on dslreports bufferbloat test
I think you may have another problem/misconfiguration. You should be seeing MUUUUCH better than 200ms. My ADSL connection goes from 600ms without any traffic-shaping, to 50ms with CoDel on upstream during a fully-saturating, single-stream upload test. My idle ping to first hop is ~10ms.
but lol…. I have been laughing that the fixed parameter values would actually cause a performance decrease...
You're absolutely right, my problem is my ISP and their crappy excuse for a router, which I can't easily replace because it also handles the phones.
My connection will easily hit 2000ms+ if someone is uploading; <200ms is a massive improvement.
I'm also laughing a little at the results; based on your previous tests it's not a huge surprise, but an explanation would be nice!
You might test enabling net.inet.tcp.inflight.enable=1 in the System->Advanced->System Tunables tab.
TCP bandwidth delay product limiting can be enabled by setting the net.inet.tcp.inflight.enable sysctl(8) variable to 1. This instructs the system to attempt to calculate the bandwidth delay product for each connection and limit the amount of data queued to the network to just the amount required to maintain optimum throughput.
This feature is useful when serving data over modems, Gigabit Ethernet, high speed WAN links, or any other link with a high bandwidth delay product, especially when also using window scaling or when a large send window has been configured. When enabling this option, also set net.inet.tcp.inflight.debug to 0 to disable debugging. For production use, setting net.inet.tcp.inflight.min to at least 6144 may be beneficial. Setting high minimums may effectively disable bandwidth limiting, depending on the link. The limiting feature reduces the amount of data built up in intermediate route and switch packet queues and reduces the amount of data built up in the local host's interface queue. With fewer queued packets, interactive connections, especially over slow modems, will operate with lower Round Trip Times. This feature only affects server-side data transmission such as uploading. It has no effect on data reception or downloading.
Adjusting net.inet.tcp.inflight.stab is not recommended. This parameter defaults to 20, representing 2 maximal packets added to the bandwidth delay product window calculation. The additional window is required to stabilize the algorithm and improve responsiveness to changing conditions, but it can also result in higher ping(8) times over slow links, though still much lower than without the inflight algorithm. In such cases, try reducing this parameter to 15, 10, or 5 and reducing net.inet.tcp.inflight.min to a value such as 3500 to get the desired effect. Reducing these parameters should be done as a last resort only.
https://www.freebsd.org/doc/handbook/configtuning-kernel-limits.html
Seems like exactly the type of thing we would be interested in.
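If you want to try it from a shell first (these are just the values the handbook text above suggests; the System Tunables tab is the persistent way to set the same things):
sysctl net.inet.tcp.inflight.enable=1
sysctl net.inet.tcp.inflight.debug=0
sysctl net.inet.tcp.inflight.min=6144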
-
Well, this is fun. It seems to actually perform worse with the 'correct' values in place.
With 50/5 I was seeing mostly <200ms response time with upstream saturated and a 'B' on dslreports bufferbloat test
With 5/100 I'm seeing mostly <300ms response time, with more between 200 and 300ms than before, and a 'C' on dslreports bufferbloat test
I think you may have another problem/misconfiguration. You should be seeing MUUUUCH better than 200ms. My ADSL connection goes from 600ms without any traffic-shaping, to 50ms with CoDel on upstream during a fully-saturating, single-stream upload test. My idle ping to first hop is ~10ms.
but lol…. I have been laughing that the fixed parameter values would actually cause a performance decrease...
You're absolutely right, my problem is my ISP and their crappy excuse for a router, which I can't easily replace because it also handles the phones.
My connection will easily hit 2000ms+ if someone is uploading; <200ms is a massive improvement.
I'm also laughing a little at the results; based on your previous tests it's not a huge surprise, but an explanation would be nice!
You might test enabling net.inet.tcp.inflight.enable=1 in the System->Advanced->System Tunables tab.
TCP bandwidth delay product limiting can be enabled by setting the net.inet.tcp.inflight.enable sysctl(8) variable to 1. This instructs the system to attempt to calculate the bandwidth delay product for each connection and limit the amount of data queued to the network to just the amount required to maintain optimum throughput.
This feature is useful when serving data over modems, Gigabit Ethernet, high speed WAN links, or any other link with a high bandwidth delay product, especially when also using window scaling or when a large send window has been configured. When enabling this option, also set net.inet.tcp.inflight.debug to 0 to disable debugging. For production use, setting net.inet.tcp.inflight.min to at least 6144 may be beneficial. Setting high minimums may effectively disable bandwidth limiting, depending on the link. The limiting feature reduces the amount of data built up in intermediate route and switch packet queues and reduces the amount of data built up in the local host's interface queue. With fewer queued packets, interactive connections, especially over slow modems, will operate with lower Round Trip Times. This feature only affects server-side data transmission such as uploading. It has no effect on data reception or downloading.
Adjusting net.inet.tcp.inflight.stab is not recommended. This parameter defaults to 20, representing 2 maximal packets added to the bandwidth delay product window calculation. The additional window is required to stabilize the algorithm and improve responsiveness to changing conditions, but it can also result in higher ping(8) times over slow links, though still much lower than without the inflight algorithm. In such cases, try reducing this parameter to 15, 10, or 5 and reducing net.inet.tcp.inflight.min to a value such as 3500 to get the desired effect. Reducing these parameters should be done as a last resort only.
https://www.freebsd.org/doc/handbook/configtuning-kernel-limits.html
Seems like exactly the type of thing we would be interested in.
Just for fun, I enabled it and disabled the traffic shaper; during the upload portion of a dslreports test, my ping hit 2700ms :)
With inflight and codel enabled it seems to be behaving fine, possibly slightly better than without, but I'll have to do more testing. -
Given that the 2.2.4 'correct' settings seem to give worse results than the 'incorrect' 2.2.3 ones (for me at least), it seems that we need the ability to tune both interval and target in order to make codel useful for everyone.
I'm guessing it would be complicated to add an extra field to the traffic shaper setup page, but since the queue limit field is currently being reused to set the target, could we add some logic to use it to set both target and interval? If the field contains a single integer, use it as the target; if it contains something else (t5i100? 5:100?) then use it as target and interval?
It's a bit beyond my skill level, but it seems like it should be possible in theory, or is there a better way to do it?
Edit: can we just set the interval and derive the target from that, if it's easier?
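Edit2: the parsing itself wouldn't be hard. Purely as a sketch of the idea (the '5:100' format is just the hypothetical one from above, and the real code would live in the shaper PHP rather than a shell script):
#!/bin/sh
# Hypothetical: split the GUI 'queue limit' field into target/interval.
field="5:100"
case "$field" in
    *:*) target=${field%%:*}; interval=${field##*:} ;;
    *)   target=$field; interval=100 ;;   # plain integer: target only, keep default interval
esac
echo "codel( target ${target} interval ${interval} )"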
-
Given that the 2.2.4 'correct' settings seem to give worse results than the 'incorrect' 2.2.3 ones (for me at least), it seems that we need the ability to tune both interval and target in order to make codel useful for everyone.
I'm guessing it would be complicated to add an extra field to the traffic shaper setup page, but since the queue limit field is currently being reused to set the target, could we add some logic to use it to set both target and interval? If the field contains a single integer, use it as the target; if it contains something else (t5i100? 5:100?) then use it as target and interval?
It's a bit beyond my skill level, but it seems like it should be possible in theory, or is there a better way to do it?
Edit: can we just set the interval and derive the target from that, if it's easier?
Those who created the codel algorithm are the ones who dictate that, and I presume they chose wisely. Target is dynamic anyway, I think.
Though, if you choose CoDel as the primary/parent scheduler, then you can choose whatever interval/target you want, via command-line, at least for temporary testing purposes.
Our problem is that we cannot customize CoDel's parameters when it is a sub-discipline, aka the "Codel Active Queue" checkbox.
Whether to expose the CoDel params in the GUI or not… if it is anything like the HFSC params, people will needlessly tweak them with unforeseen consequences simply because they are there. I dunno... maybe we can use the System Tunables tab and add custom params and values that way?
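For the temporary command-line testing mentioned above, the usual pfSense trick should work (the paths are the standard pfSense ones; the exact codel option syntax for the altq line is my assumption, copied from what pfctl -vs queue prints earlier in the thread, and it all gets overwritten on the next filter reload anyway):
cp /tmp/rules.debug /tmp/rules.test
vi /tmp/rules.test         # edit the 'altq on em1 codel( target ... interval ... )' line
pfctl -f /tmp/rules.test   # load the edited ruleset
pfctl -vs queue            # confirm the new target/interval took effect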