Netgate Discussion Forum

    Limiter bandwidth setting causing sharp drop in bandwidth

Traffic Shaping | 15 Posts | 4 Posters | 3.3k Views
hiryu:

Thanks to help on IRC, I've determined that the issue is that the default bucket and queue sizes are insufficient and need to be tuned. I'll probably reply to my own thread with more details as I tune this further.
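For anyone wanting to follow along, this is roughly what that tuning looks like at the ipfw/dummynet level, going by the FreeBSD ipfw(8) manpage (the pipe number, rate, and sizes below are placeholders, not values from this thread):

    # Raise the hash-table size (default 64 buckets) and the queue depth
    # (default 50 slots) on a dummynet pipe with per-host dynamic queues:
    ipfw pipe 1 config bw 500Mbit/s buckets 1024 queue 400 mask dst-ip 0xffffffff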

d4ab9f:

I have the same issue with a 500Mb/s downstream connection. If I set the limiter to anything close to 500, the real speed drops to roughly 250 with reasonably good bufferbloat (A) on dslreports.
If I set it around 590-610, it actually reaches 350-400ish, but bufferbloat gets bad (D).
And without the limiter, speed goes all the way up to 500 with horrible (F) bufferbloat.

Please share your solution if you find it.
Is there a rule for how to choose bucket and queue sizes depending on downstream speed?
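One common rule of thumb (an outside assumption, not something established in this thread) is to cap the queue at the bandwidth-delay product for the largest queuing delay you're willing to tolerate:

    # Rule-of-thumb sketch: derive a queue depth from a target max delay.
    BW_BPS=$((500 * 1000 * 1000))   # link rate: 500 Mb/s
    DELAY_MS=10                     # tolerated worst-case queuing delay
    BYTES=$((BW_BPS / 8 * DELAY_MS / 1000))
    PKTS=$((BYTES / 1500))          # assuming full-size 1500-byte packets
    echo "queue <= $BYTES bytes (~$PKTS packets)"
    # -> queue <= 625000 bytes (~416 packets)

For what it's worth, a 10ms target at 500Mb/s lands near the 350-400 range reported elsewhere in this thread.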

Harvy66:

Just enable the Codel Active Queue option for your queue discipline; it's sized by time instead of by number of packets. The default is 50 packets. Just think about it: 500Mb/s can fill a 50-packet queue of 1500-byte packets in 1.2ms. That's probably not enough time for the thread scheduler to react.

Nullity:

            @Harvy66:

Just enable the Codel Active Queue option for your queue discipline; it's sized by time instead of by number of packets. The default is 50 packets. Just think about it: 500Mb/s can fill a 50-packet queue of 1500-byte packets in 1.2ms. That's probably not enough time for the thread scheduler to react.

Do limiters have CoDel? 2.4 has it, but not in the GUI.

Harvy66:

              @Nullity:

              @Harvy66:

Just enable the Codel Active Queue option for your queue discipline; it's sized by time instead of by number of packets. The default is 50 packets. Just think about it: 500Mb/s can fill a 50-packet queue of 1500-byte packets in 1.2ms. That's probably not enough time for the thread scheduler to react.

Do limiters have CoDel? 2.4 has it, but not in the GUI.

He mentioned something about a "default" bucket, and I guess I assumed that was in reference to the default queue, since a lot of people conflate "limiters" and "shaping". I've never used limiters, but I thought the only buckets you could use were by IP mask; I didn't think there was a "default".

d4ab9f:

                @Harvy66:

Just enable the Codel Active Queue option for your queue discipline; it's sized by time instead of by number of packets. The default is 50 packets. Just think about it: 500Mb/s can fill a 50-packet queue of 1500-byte packets in 1.2ms. That's probably not enough time for the thread scheduler to react.

50 packets * 1500 bytes = 75,000 bytes.
In one second it sends 500 * 1024 * 1024 / 8 = 65,536,000 bytes,
so it should be able to fill 75,000 bytes in 75,000 / 65,536,000 seconds, or 0.001144 seconds = 1.1 ms.

Not sure what I missed to get to 1.2 ms.
EDIT: I used mebibits instead of megabits, my bad. 75,000 / 62,500,000 makes it exactly 1.2 ms.
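The corrected figure is easy to double-check in a shell (using SI megabits, per the EDIT above):

    QUEUE_BYTES=$((50 * 1500))            # 50 packets * 1500 bytes
    RATE_Bps=$((500 * 1000 * 1000 / 8))   # 500 Mb/s = 62,500,000 bytes/s
    echo "fill time: $((QUEUE_BYTES * 1000000 / RATE_Bps)) microseconds"
    # -> fill time: 1200 microseconds = 1.2 ms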

How much time does the thread scheduler need to react? Should I increase the queue size to give it enough time?
I was under the impression that the GUI allows changing the queue size, but not the time.

Nullity:

                  @d4ab9f:

                  @Harvy66:

Just enable the Codel Active Queue option for your queue discipline; it's sized by time instead of by number of packets. The default is 50 packets. Just think about it: 500Mb/s can fill a 50-packet queue of 1500-byte packets in 1.2ms. That's probably not enough time for the thread scheduler to react.

50 packets * 1500 bytes = 75,000 bytes.
In one second it sends 500 * 1024 * 1024 / 8 = 65,536,000 bytes,
so it should be able to fill 75,000 bytes in 75,000 / 65,536,000 seconds, or 0.001144 seconds = 1.1 ms.

Not sure what I missed to get to 1.2 ms.
EDIT: I used mebibits instead of megabits, my bad. 75,000 / 62,500,000 makes it exactly 1.2 ms.

You should likely focus on network latency and queue depth.

Beware changing the queue depth, since (unlike CoDel) the depth is set by the number of packets rather than by the total size of the queue. Packets can range in size from ~72 to 1500 bytes, so the worst-case latency of a packet-count queue can vary by roughly a factor of 20.
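To put numbers on that spread (a quick sketch, reusing the 50-packet queue and 500Mb/s rate from earlier in the thread):

    RATE_Bps=$((500 * 1000 * 1000 / 8))
    for SIZE in 72 1500; do
      # time to drain a full 50-packet queue of SIZE-byte packets
      echo "$SIZE-byte packets: $((50 * SIZE * 1000000 / RATE_Bps)) us"
    done
    # -> 72-byte packets: 57 us; 1500-byte packets: 1200 us (~21x)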

hiryu:

                    Wow, this thread really came alive over the weekend.

                    For the more experienced folks, please correct me where I'm wrong (or let me know that I'm right).

As far as I can tell about the bucket size… it seems this is the size of the hash map used for the dynamic queues, i.e. the mask setting. The default is 64, so if you have more than 64 IPs active at once, it could make sense to increase it. However, I'm not sure how the IPs are hashed, etc. I chose 1024; I don't have anything near that many, but I have plenty of memory.

As far as queue sizing goes, if your queues are insufficiently sized, you'll of course get dropped packets. If your queue is too large relative to your connection's speed, you'll get queuing delay. I ultimately tuned this by (ab)using a speed test over and over while incrementing the queue size by 50 each time, until I started hitting my max speed. This started to happen around 350-400 for me. I then went a little higher for headroom (512), but I think I'll bring it back down to 400, as I suspect that when I'm using less than my max bandwidth I'll hit queuing delay.
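That iterate-and-measure loop could also be scripted instead of run by hand. A hypothetical sketch, assuming a dummynet pipe 1 and an iperf3 server you control at $SERVER in place of the browser speed test:

    for Q in 50 100 150 200 250 300 350 400; do
      ipfw pipe 1 config bw 500Mbit/s queue $Q   # bump the queue depth
      echo "queue=$Q"
      iperf3 -c "$SERVER" -t 10 | tail -n 3      # throughput summary
    done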

My understanding of the above is largely from reading the ipfw manpage in FreeBSD, as I was unable to find anything very concrete via Google.

Regarding queuing delay in dummynet... it seems like something that could potentially be fixed by making the dummynet code in the kernel more intelligent; however, this is entirely conjecture on my part.

Since it's come up... I came across CoDel while looking into traffic shaping, and I read that you only want to use CoDel when dealing with TCP. Is this correct? Does a queue size need to be set in ALTQ when using CoDel at all, or does CoDel just start with a default value and tune from there?

From what I've read, PRIQ doesn't need you to set a bandwidth, but pfSense requires a bandwidth setting when using PRIQ nonetheless. Am I wrong on this, or is this a bug?

hiryu:

On second thought, is queuing delay something that can only really occur once you begin exceeding your limiter's bandwidth allocation?

Also, according to the ipfw manpage (so I'm sure this is also available in pfSense, though I haven't experimented with it yet), you can specify the queue size in KB rather than in number of packets. Does anyone know if specifying a size in KB puts an actual limit on the max combined size of the data stored in the queue? Or is a size in KB still used to calculate the queue length in packets (by perhaps dividing the KB figure by 1500, etc.)?

When I was referring to the "default bucket" above, I meant the default bucket size (which is why the sentence read "default bucket and queue sizes").

                      To elaborate on my previous comment, I'm currently assuming that each IP is hashed to a bucket, where either each bucket gets its own queue or each IP gets its own queue, depending on how collisions are handled (I'm guessing probably the latter).
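Regarding the KB question above, both forms do appear in the ipfw(8) pipe configuration syntax; a sketch with placeholder numbers:

    ipfw pipe 1 config bw 500Mbit/s queue 100        # 100 packet slots
    ipfw pipe 2 config bw 500Mbit/s queue 150Kbytes  # 150 KB of buffered data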

Nullity:

                        @hiryu:

On second thought, is queuing delay something that can only really occur once you begin exceeding your limiter's bandwidth allocation?

Also, according to the ipfw manpage (so I'm sure this is also available in pfSense, though I haven't experimented with it yet), you can specify the queue size in KB rather than in number of packets. Does anyone know if specifying a size in KB puts an actual limit on the max combined size of the data stored in the queue? Or is a size in KB still used to calculate the queue length in packets (by perhaps dividing the KB figure by 1500, etc.)?

When I was referring to the "default bucket" above, I meant the default bucket size (which is why the sentence read "default bucket and queue sizes").

                        To elaborate on my previous comment, I'm currently assuming that each IP is hashed to a bucket, where either each bucket gets its own queue or each IP gets its own queue, depending on how collisions are handled (I'm guessing probably the latter).

You may be right about that, because I was blindly talking about pf/ALTQ (queues) rather than ipfw (limiters).

Though, you may need to resort to the command line to achieve those subtle changes. I dunno if the GUI is capable.

Generally, pfSense works and does precisely what you configure it to do… If you can do some more definitive troubleshooting to pinpoint where the problem lies, that'd be great. Sadly, pfSense has given me few problems that weren't caused by my own ignorance, so I assume most problems like this are self-inflicted. :(

d4ab9f:

                          @hiryu:

                          Wow, this thread really came alive over the weekend.

                          For the more experienced folks, please correct me where I'm wrong (or let me know that I'm right).

As far as I can tell about the bucket size… it seems this is the size of the hash map used for the dynamic queues, i.e. the mask setting. The default is 64, so if you have more than 64 IPs active at once, it could make sense to increase it. However, I'm not sure how the IPs are hashed, etc. I chose 1024; I don't have anything near that many, but I have plenty of memory.

As far as queue sizing goes, if your queues are insufficiently sized, you'll of course get dropped packets. If your queue is too large relative to your connection's speed, you'll get queuing delay. I ultimately tuned this by (ab)using a speed test over and over while incrementing the queue size by 50 each time, until I started hitting my max speed. This started to happen around 350-400 for me. I then went a little higher for headroom (512), but I think I'll bring it back down to 400, as I suspect that when I'm using less than my max bandwidth I'll hit queuing delay.

Where did you configure the bucket size and queue size?
The only thing I see in the GUI is 'Queue Limit' and 'TBR Size'.
I tried to change 'Queue Limit', but any value in the box causes this error:

    Filter Reload
    There were error(s) loading the rules: /tmp/rules.debug:43: syntax error - The line in question reads [43]: altq on em0 codelq ( qlimit 400 ) bandwidth 450Mb queue

[screenshot: Limit450queue.JPG]

hiryu:

Looking at your screenshot, it seems you're in the priority queue settings. The settings I was talking about are for limiters, which you create under the limiter "tab".

d4ab9f:

                              @hiryu:

Looking at your screenshot, it seems you're in the priority queue settings. The settings I was talking about are for limiters, which you create under the limiter "tab".

I can't find a way to apply CoDel to a limiter from the "Limiters" tab.
