Netgate Discussion Forum

    CoDel - How to use

Traffic Shaping | 206 Posts | 30 Posters | 114.3k Views
doktornotor:

      Was never fixed AFAICT? https://redmine.pfsense.org/issues/4692

kieranc:

        @doktornotor:

        Was never fixed AFAICT? https://redmine.pfsense.org/issues/4692

Indeed, has anyone tried the patch? I don't really want to have to build pfSense in order to test it; that's a lot of repos to clone…

Edit: The PR has been merged; the new values should be applied in 2.2.4 or anything built from here on...

Harvy66:

I wish this could be set via config instead of at compile time. One problem at a time though, eh?

50/5 seems to be working well for me right now. I guess I'll see how my bufferbloat is affected once this change finally makes it in. I'm getting 0ms of bufferbloat and full throughput already. According to DSLReports my bloat can spike, but rarely; it's more of an issue when doing 32 upload streams.

Nullity:

            @doktornotor:

            Was never fixed AFAICT? https://redmine.pfsense.org/issues/4692

            https://github.com/pfsense/pfsense-tools/commit/3108a902bd816036a3abffd3ec669767140891a7

            I dunno. I am unsure of many things. :(

I probably should have updated the redmine submission. The redmine patch was initial code to show what I had found, hopefully to help a dev pinpoint the problem.

            The github patches were the best I could do, but I should probably stop trying to patch pfSense considering that I cannot build pfSense to test my code. :(


kieranc:

              So this is from the latest nightly:

[2.2.4-DEVELOPMENT][admin@pfSense.localdomain]/root: pfctl -vs queue
altq on em0 codel( target 50 interval 100) bandwidth 600Kb tbrsize 1500

The interval successfully changed; now we just have to figure out where the target of 50 is coming from…

              Edit: I just set the 'queue limit' to 25 in the GUI and my target is now 25.... Victory?

Edit 2: From the 2.2.4 19/07/2015 nightly, with the queue limit set to 5:

[2.2.4-DEVELOPMENT][admin@pfSense.localdomain]/root: pfctl -vs queue
altq on em1 codel( target 5 interval 100) bandwidth 6Mb tbrsize 6000
  [ pkts:         85  bytes:       9938  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]

So it wasn't anything I did yesterday that fixed it, but it does seem to be fixed/workable in 2.2.4.

Nullity:

                If qlimit is 0, it defaults to 50, and codel gets the (initial?) target value from qlimit.
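To make that concrete, here's a minimal Python sketch of the fallback being described (illustrative names only, not pfSense's actual ALTQ code):

DEFAULT_QLIMIT = 50  # per the above: an unset qlimit falls back to 50

def effective_codel_params(qlimit, interval_ms=100):
    # Reported behaviour: a qlimit of 0 falls back to 50, and CoDel's
    # target is then seeded from qlimit.
    if qlimit == 0:
        qlimit = DEFAULT_QLIMIT
    return {"target_ms": qlimit, "interval_ms": interval_ms}

print(effective_codel_params(0))  # {'target_ms': 50, 'interval_ms': 100} -- the default pfctl output above
print(effective_codel_params(5))  # {'target_ms': 5, 'interval_ms': 100} -- queue limit 5 in the GUI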


kieranc:

                  @Nullity:

                  If qlimit is 0, it defaults to 50, and codel gets the (initial?) target value from qlimit.

Is qlimit the queue length, or something else entirely?

Nullity:

@kieranc:

Is qlimit the queue length, or something else entirely?

qlimit is the queue length, which becomes useless when codel is active, since codel dynamically controls queue length (AQM).
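Roughly, the distinction looks like this (a hypothetical Python sketch, not pfSense code): a plain FIFO drops on instantaneous queue length, while an AQM like CoDel timestamps each packet at enqueue and reacts to how long it actually sat in the queue:

import time

def fifo_should_drop(queue_len, qlimit):
    # Tail drop: only the instantaneous queue length matters.
    return queue_len >= qlimit

def codel_over_target(enqueue_time, target_s=0.005):
    # AQM: what matters is the packet's sojourn time in the queue.
    return (time.monotonic() - enqueue_time) > target_s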


kieranc:

@Nullity:

qlimit is the queue length, which becomes useless when codel is active, since codel dynamically controls queue length (AQM).

So when using codel, the 'queue limit' setting seems to change the target instead… handy, but not very obvious.
Thanks!

Nullity:

Yeah, it is pretty confusing, but I'll take CoDel however I can get it. :)
Ermal ported it himself, iirc. Ahead of the curve, that guy! :)

                        I still dunno how to view or set codel's parameters when it is a sub-discipline though. Default or gtfo, I suppose…


Harvy66:

Does the whole target/qlimit thing apply to CoDel as both the scheduler and the child discipline?

                          Do you know if the interval changes? The interval is supposed to be 20x the target.

kieranc:

@Nullity:

I still dunno how to view or set codel's parameters when it is a sub-discipline though. Default or gtfo, I suppose…

I've just had a tinker and I can't find anything, but that certainly doesn't mean it's not there.
I've rarely used BSD; is there some /proc-type interface the information comes from that can be queried directly?

Nullity:

@Harvy66:

Does the whole target/qlimit thing apply to CoDel as both the scheduler and the child discipline?

Do you know if the interval changes? The interval is supposed to be 20x the target.

iirc, the sub-discipline setup is purely configured by hard-coded defaults and has no user-configurable/viewable params that I am aware of. Hopefully there is a simple way for a user to view/set the params in that situation. ermal? ;)

                              interval is the only value required by codel, so I do not think it changes. Technically, the target should be set based on the interval value, not vice versa.
                              afaik, current codel implementations do not automagically set interval to live RTT.

The CoDel building blocks are able to adapt to different or time-varying link rates, to be easily used with multiple queues, to have excellent utilization with low delay and to have a simple and efficient implementation. The only setting CoDel requires is its interval value, and as 100ms satisfies that definition for normal internet usage, CoDel can be parameter-free for consumer use.

                              See: https://tools.ietf.org/id/draft-nichols-tsvwg-codel-02.txt

I have tried to run a thought experiment concerning how a 5ms interval should negatively affect codel's performance, but I cannot fully comprehend it. I need to set up a bufferbloat lab…
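The arithmetic behind those defaults is simple (a sketch of the draft's rule of thumb, nothing more): pick interval near the typical RTT, then derive target as interval/20, so the 100ms default interval implies the usual 5ms target:

def codel_params(typical_rtt_ms):
    # Rule of thumb from the CoDel draft: interval ~ typical RTT,
    # target ~ interval / 20.
    interval_ms = typical_rtt_ms
    target_ms = interval_ms / 20.0
    return target_ms, interval_ms

print(codel_params(100))  # (5.0, 100) -- the documented defaults
print(codel_params(45))   # (2.25, 45) -- the sub-30ms-ping example later in this thread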


Nullity:

@kieranc:

I've just had a tinker and I can't find anything, but that certainly doesn't mean it's not there.
I've rarely used BSD; is there some /proc-type interface the information comes from that can be queried directly?

iirc, the values could be retrieved through some dev/proc interface, but it required an ioctl system call and could not be done via shell commands.

                                Though, I was confused then and now I've forgotten stuff, so I might be sense-making not-so-much.


kieranc:

Well, this is fun. It seems to actually perform worse with the 'correct' values in place.
With 50/5 I was seeing mostly <200ms response time with upstream saturated, and a 'B' on the dslreports bufferbloat test.
With 5/100 I'm seeing mostly <300ms response time, with more between 200 and 300ms than before, and a 'C' on the dslreports bufferbloat test.

Harvy66:

That might explain why the CoDel people were saying they typically saw bufferbloat as low as 30ms, but I was seeing 0ms. pfSense may be more aggressive with the 5ms interval.

The interval is how often a single packet will be dropped until the packets' time in queue falls below the target. If the target is 100ms with a 5ms interval, once you get 100ms of packets queued, CoDel will start dropping packets every 5ms and slowly increase the rate. It's not exactly how I say it, but close. They have some specific math that makes everything not quite as simple as described, but very similar.

The interval is supposed to be set to your "normal" RTT, and the target should be 1/20th that value. Most services I hit have sub-30ms pings; my interval should be, say, 45ms and my target 2.25ms.

If the interval is too high, CoDel will be too passive and allow increasing bufferbloat, but if it's too low, it will be too aggressive and reduce throughput.

Maybe this is why pfSense's CoDel gives bad packet loss and throughput on slow connections. If the interval is 5ms, many packets will be dropped in a row.
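For reference, the drop scheduling being paraphrased is CoDel's control law (see the draft linked earlier): once packets have queued above target for a full interval, drop one, then schedule the next drop at interval/sqrt(count) so the pressure ramps up gradually. A simplified Python sketch, not the ALTQ implementation:

from math import sqrt

INTERVAL = 0.100  # seconds; should approximate a typical RTT

def next_drop_time(t, drop_count):
    # Control law: successive drops come at interval / sqrt(count).
    return t + INTERVAL / sqrt(drop_count)

t = 0.0
for count in range(1, 6):
    t = next_drop_time(t, count)
    print("drop #%d at t=%.3fs" % (count, t))
# Gaps shrink: 100ms, 71ms, 58ms, 50ms, 45ms. With the buggy 5ms
# interval the same law fires 20x as fast, which fits the heavy packet
# loss reported on slow connections.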

Nullity:

@kieranc:

Well, this is fun. It seems to actually perform worse with the 'correct' values in place.
With 50/5 I was seeing mostly <200ms response time with upstream saturated, and a 'B' on the dslreports bufferbloat test.
With 5/100 I'm seeing mostly <300ms response time, with more between 200 and 300ms than before, and a 'C' on the dslreports bufferbloat test.

                                      I think you may have another problem/misconfiguration. You should be seeing MUUUUCH better than 200ms. My ADSL connection goes from 600ms without any traffic-shaping, to 50ms with CoDel on upstream during a fully-saturating, single-stream upload test. My idle ping to first hop is ~10ms.

But lol… I have been laughing that the fixed parameter values would actually cause a performance decrease...


kieranc:

@Nullity:

I think you may have another problem/misconfiguration. You should be seeing MUUUUCH better than 200ms. My ADSL connection goes from 600ms without any traffic-shaping, to 50ms with CoDel on upstream during a fully-saturating, single-stream upload test. My idle ping to first hop is ~10ms.

You're absolutely right; my problem is my ISP and their crappy excuse for a router, which I can't easily replace because it also handles the phones.
My connection will easily hit 2000ms+ if someone is uploading; <200ms is a massive improvement.

I'm also laughing a little at the results; based on your previous tests it's not a huge surprise, but an explanation would be nice!

Nullity:

@kieranc:

My connection will easily hit 2000ms+ if someone is uploading; <200ms is a massive improvement.

I'm also laughing a little at the results; based on your previous tests it's not a huge surprise, but an explanation would be nice!

                                          You might test enabling net.inet.tcp.inflight.enable=1 in the System->Advanced->System Tunables tab.

                                          TCP bandwidth delay product limiting can be enabled by setting the net.inet.tcp.inflight.enable sysctl(8) variable to 1. This instructs the system to attempt to calculate the bandwidth delay product for each connection and limit the amount of data queued to the network to just the amount required to maintain optimum throughput.

This feature is useful when serving data over modems, Gigabit Ethernet, high speed WAN links, or any other link with a high bandwidth delay product, especially when also using window scaling or when a large send window has been configured. When enabling this option, also set net.inet.tcp.inflight.debug to 0 to disable debugging. For production use, setting net.inet.tcp.inflight.min to at least 6144 may be beneficial. Setting high minimums may effectively disable bandwidth limiting, depending on the link. The limiting feature reduces the amount of data built up in intermediate route and switch packet queues and reduces the amount of data built up in the local host's interface queue. With fewer queued packets, interactive connections, especially over slow modems, will operate with lower Round Trip Times. This feature only affects server-side data transmission such as uploading. It has no effect on data reception or downloading.

                                          Adjusting net.inet.tcp.inflight.stab is not recommended. This parameter defaults to 20, representing 2 maximal packets added to the bandwidth delay product window calculation. The additional window is required to stabilize the algorithm and improve responsiveness to changing conditions, but it can also result in higher ping(8) times over slow links, though still much lower than without the inflight algorithm. In such cases, try reducing this parameter to 15, 10, or 5 and reducing net.inet.tcp.inflight.min to a value such as 3500 to get the desired effect. Reducing these parameters should be done as a last resort only.

                                          https://www.freebsd.org/doc/handbook/configtuning-kernel-limits.html

                                          Seems like exactly the type of thing we would be interested in.
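The mechanism quoted above keys off the bandwidth-delay product, which is easy to compute for the links in this thread (illustrative arithmetic, not the kernel's exact calculation):

def bdp_bytes(bandwidth_bps, rtt_s):
    # Bandwidth-delay product: bytes in flight needed to keep the pipe full.
    return bandwidth_bps / 8.0 * rtt_s

# The 6Mb upstream from the pfctl output earlier, at a 50ms RTT:
print(bdp_bytes(6000000, 0.050))  # 37500.0 bytes, about 25 full-size 1500-byte packets
# The handbook's suggested net.inet.tcp.inflight.min floor of 6144 bytes
# is about four full-size packets.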


kieranc:

@Nullity:

You might test enabling net.inet.tcp.inflight.enable=1 in the System->Advanced->System Tunables tab.

Just for fun, I enabled it and disabled the traffic shaper; during the upload portion of a dslreports test, my ping hit 2700ms :)
With inflight and codel enabled it seems to be behaving fine, possibly slightly better than without, but I'll have to do more testing.
