
Queue length in LAN shaper

Nullity

Why do you want a huge queue? What do you think it will improve?

Packet drops are a necessary part of most TCP congestion-control algorithms. They're perfectly normal.

Harvy66

@w0w:

@Harvy66:

Average packet size on the Internet is something like 600 bytes. 50k packets × 600 bytes × 8 bits per byte = 240 Mbit of queue. Even if you had a 1Gb connection, that would be 240 ms of queue, which is horribly large.

Just check "Active Codel Queue" or whatever under the queue settings. It is already sized incredibly large but won't give you bufferbloat issues. And FYI, it is healthy for networks to drop packets; it is unhealthy to hold onto packets for long periods of time, which breaks all kinds of stuff.

I am using "Active Codel Queue" on the main "qInternet" queue, which as I understand it contains all the other queues together. But I cannot figure out why the queue length behaves so strangely in this situation. If it is auto-tuned, why isn't that documented clearly?

Other than bandwidth, parent queue settings do not affect child queues as far as I know, though I could be wrong in some cases. "Codel" is a very specific algorithm that by definition auto-tunes. There is no reason to document it because anyone who knows what Codel is already knows. Kind of like documenting what RED or ECN imply.
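
As a quick sanity check on the arithmetic above (a minimal sketch in shell; the 50k-packet qlimit and 600-byte average are the figures quoted in this thread):

  # queue capacity in bits: qlimit × average packet size × 8 bits/byte
  echo $(( 50000 * 600 * 8 ))    # 240000000 bits = 240 Mbit
  # worst-case drain time at 1 Gbit/s: 240000000 / 1000000000 = 0.24 s ≈ 240 ms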

w0w

@Nullity:

Why do you want a huge queue? What do you think it will improve?

Packet drops are a necessary part of most TCP congestion-control algorithms. They're perfectly normal.

Just because I can. Why not? If it cannot actually be set for some reason, then the GUI should limit it.

w0w

@Harvy66:

Other than bandwidth, parent queue settings do not affect child queues as far as I know, though I could be wrong in some cases. "Codel" is a very specific algorithm that by definition auto-tunes. There is no reason to document it because anyone who knows what Codel is already knows. Kind of like documenting what RED or ECN imply.

I don't use plain Codel; I use HFSC. The bug/feature I described exists even with Codel not enabled on the queue.
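
For context, a pfSense-generated HFSC hierarchy with a raised qlimit looks roughly like this in pf.conf terms (a sketch reconstructed from the pfctl output later in this thread; the parent bandwidth, the qACK sibling, and its realtime value are assumptions, not w0w's actual config):

  altq on igb1 hfsc bandwidth 300Mb queue { qInternet }
  queue qInternet on igb1 bandwidth 300Mb hfsc { qACK, qP2P }
  queue qACK on igb1 bandwidth 60Mb hfsc( realtime 20Mb )
  queue qP2P on igb1 bandwidth 45Mb qlimit 50000 hfsc( default )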

Nullity

Seems like you simply don't understand pfSense's traffic shaping well enough to be doing these tests, so you are expecting incorrect outcomes.

Reading the entire pfSense wiki and perhaps The Book of PF, along with some HFSC tutorials and some CoDel tutorials, would be a good start.

Nullity

@Harvy66:

Other than bandwidth, parent queue settings do not affect child queues as far as I know, though I could be wrong in some cases. "Codel" is a very specific algorithm that by definition auto-tunes. There is no reason to document it because anyone who knows what Codel is already knows. Kind of like documenting what RED or ECN imply.

Good post. :)

Although I think OP's goal is nonsensical and should be fundamentally critiqued, I think your statement about the parent HFSC queue's CoDel/qlimit values not directly affecting the child queues is accurate.
IIRC, when viewing pftop, filling a child queue does not also fill the parent queue, which seems to support our claims… OP may want to confirm that, though. I'm not sure.
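
For anyone who wants to check that themselves, pftop has a queue view that shows per-queue lengths and drops live (a sketch; assumes shell access to the firewall and that pftop is installed):

  # live per-queue view: qlength, drops, measured rates (press q to quit)
  pftop -v queue
  # one-shot snapshot of the same counters:
  pfctl -s queue -v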

w0w

@Nullity:

Seems like you simply don't understand pfSense's traffic shaping well enough to be doing these tests, so you are expecting incorrect outcomes.

Reading the entire pfSense wiki and perhaps The Book of PF, along with some HFSC tutorials and some CoDel tutorials, would be a good start.

This behavior of pf is not expected; the same config worked without these problems on version 2.2, and I asked a simple question: is it a bug or a feature?
If the HFSC child queue length is now auto-tuned in some cases, OK, but I cannot find anything documented about it. And don't tell me about Codel, just forget it; this is NOT caused by Codel. I have tested with the Codel option NOT enabled, and the problem persists exactly the same way.

Nullity

@w0w:

This behavior of pf is not expected; the same config worked without these problems on version 2.2, and I asked a simple question: is it a bug or a feature?
…

Post all details about your queues.
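
For reference, the command that dumps those details in full (w0w uses it later in this thread; the second -v adds live rate measurements):

  # per-queue packets, bytes, drops, qlength, and measured pps/bps
  pfctl -s queue -v -v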

w0w

Attached:

qp2p.jpg
qinternet.jpg
no_traffic.jpg
load.jpg

Nullity

Disable ECN.

Also, you have two qP2P queues (WAN & LAN, i.e. upload & download, respectively). Are you sure both are set to 50000?

w0w

@Nullity:

Disable ECN.

Also, you have two qP2P queues (WAN & LAN, i.e. upload & download, respectively). Are you sure both are set to 50000?

Yes, both are 50000; you can see it in the screenshots in the previous post.
Disabled ECN, no effect.

Nullity

@w0w:

Yes, both are 50000; you can see it in the screenshots in the previous post.
Disabled ECN, no effect.

So, it says 50000 until it's actually under load, where it changes to 5000?

Try using pftop to view the queues' status. There have been many reported quirks with the Status / Queues graphs.

Just so you know, queueing (adding additional delay to) packets going from a low-bandwidth link (e.g. 100Mbit) to a high-bandwidth link (e.g. 1Gbit) is nonsensical. With 2134 MTU-sized packets needlessly queued, you are adding 256 milliseconds of latency to your download stream (2134 × 1500 bytes × 8 = 25,608,000 bits on your 100Mbit connection). I am happy to try to solve this strange qlimit quirk, but please understand that you are trying to do something that makes no sense.
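
Spelled out in shell arithmetic (the 2134-packet count and the 100Mbit example link are the figures from the paragraph above):

  # buffered bits: packets × MTU bytes × 8 bits/byte
  echo $(( 2134 * 1500 * 8 ))    # 25608000 bits
  # added delay at 100 Mbit/s: 25608000 / 100000000 ≈ 0.256 s ≈ 256 ms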

w0w

@Nullity:

So, it says 50000 until it's actually under load, where it changes to 5000?
…

Yes, it says 50000 until it's actually under load, where it changes to 5000.

It's a 300/300 Mbit connection, and additional delay for P2P traffic is fully acceptable IMHO, as long as it's applied only when needed, not all the time.
If you read the first post, you know the problem also persists with a qlimit of 10000, and maybe with a 1000-packet queue length as well.
I'll do further testing with pftop to rule out possible flaws in the GUI code.

Nullity

Can you share a screenshot of the queue stats during an upload (WAN) load test?

w0w

@Nullity:

Can you share a screenshot of the queue stats during an upload (WAN) load test?

Sorry, I can't find a way to fill my outbound pipe with P2P traffic; it just doesn't want to do it.

w0w

Look what I have found:

  pfctl -s queue -v -v

  queue   qP2P on igb1 bandwidth 45Mb qlimit 50000 hfsc( red ecn default )
    [ pkts:      44769  bytes:    3136059  dropped pkts:      0 bytes:      0 ]
    [ qlength:   0/50000 ]
    [ measured:   116.1 packets/s, 68.58Kb/s ]

Does "red" mean RED is enabled? That's the problem: I have not enabled it!

During load:

  queue   qP2P on igb1 bandwidth 45Mb qlimit 50000 hfsc( red ecn default )
    [ pkts:    4326757  bytes: 6132296391  dropped pkts:    242 bytes: 353048 ]
    [ qlength: 4174/50000 ]
    [ measured: 25052.9 packets/s, 290.45Mb/s ]

So it actually starts dropping around a limit of 5000, but this time the queue length limit is shown correctly as 50000. I think it's auto-tuned by RED? I can't find where the GUI gets the actual limit of 5000, but it looks like the right value.
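
One way to chase down where that flag comes from: compare the ruleset pfSense generates with what pf actually loaded (a sketch; /tmp/rules.debug is where pfSense writes its generated ruleset, and the grep pattern assumes the queue name above):

  # what the GUI generated
  grep qP2P /tmp/rules.debug
  # what pf has loaded
  pfctl -s queue | grep qP2P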

Nullity

@w0w:

Sorry, I can't find a way to fill my outbound pipe with P2P traffic; it just doesn't want to do it.

Sharing a popular Linux distribution torrent like Ubuntu or Debian is a sure-fire way to saturate your upload.

Nullity

@w0w:

Look what I have found:

  pfctl -s queue -v -v
…
Does "red" mean RED is enabled? That's the problem: I have not enabled it!
…
So it actually starts dropping around a limit of 5000, but this time the queue length limit is shown correctly as 50000. I think it's auto-tuned by RED? I can't find where the GUI gets the actual limit of 5000, but it looks like the right value.

Hmm, interesting results.

You should have RED and ECN disabled (and any other AQMs) if your goal is to (needlessly) fill your over-sized queue. I thought you had already disabled ECN… is the GUI reporting ECN as disabled when it's actually not?
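
Concretely, with RED and ECN actually disabled, the rule shown by pfctl above should lose those flags (a sketch of the expected before/after; only the options inside hfsc( ) change):

  # before, as pfctl -s queue currently shows:
  queue qP2P on igb1 bandwidth 45Mb qlimit 50000 hfsc( red ecn default )
  # after unchecking RED/ECN in the GUI and reloading the filter:
  queue qP2P on igb1 bandwidth 45Mb qlimit 50000 hfsc( default )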

Again, like I said earlier, you ultimately never want an artificially inflated queue depth, so the fact that it doesn't fill is a good thing. Increasing latency is never the primary goal of a traffic-shaping setup; usually increased latency is a side-effect of other goals, like increasing bandwidth or decreasing the latency of other, higher-priority traffic.

A more telling test would be saturating upload rather than download, so try to get some results from an upload test. Download and upload QoS are very different. A great tutorial covering those differences and other QoS fundamentals is: http://www.linksysinfo.org/index.php?threads/qos-tutorial.68795/

Just for fun, you might try enabling CoDel or setting the qlimit to 1 on qP2P as a test to see if your actual performance changes. There is no research I am aware of that shows traffic-shaping (queueing packets) to be superior to traffic-policing (not queueing packets) for ingress/download when moving from a low-bandwidth network node (e.g. 300Mbit) to a higher-bandwidth node (e.g. 1Gbit). Here's a Cisco article comparing shaping and policing: http://www.cisco.com/c/en/us/support/docs/quality-of-service-qos/qos-policing/19645-policevsshape.html
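
A minimal sketch of that qlimit-1 experiment (the same queue as above with only the qlimit changed): with room for a single packet, anything arriving while the queue is occupied is dropped immediately, which approximates a policer rather than a shaper.

  queue qP2P on igb1 bandwidth 45Mb qlimit 1 hfsc( default )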

Harvy66

@Nullity:

Sharing a popular Linux distribution torrent like Ubuntu or Debian is a sure-fire way to saturate your upload.

Looking at their bandwidth settings, they're shaping to 300Mb/s. Seeding open-source torrents is not going to dent that. I seed almost 200 ISO images of the most popular Linux distros from my SSDs, and my average upload is about 10Mb/s on my 150Mb/s connection. They're already heavily seeded.

Unless I jump on a hugely popular ISO within a few hours of it coming out, I will never max out my connection, because there will already be so many seeders.

KOM

Take a Linux DVD ISO, rename it to Netflix_Stranger_Things_Season_2_Leak.zip, put it up on TPB, and watch your bandwidth die ;D
