Queue length in LAN shaper



  • pfSense 2.3.2
    I have a qP2P queue on the LAN interface with a queue limit of 50000 packets, but under heavy load the Queue Status page shows the limit as 5000 and drops packets above 5000. If I set the limit to 10000 (less than 50000), it behaves like it's 100: the Queue Status page shows 10000, but everything above 100 is dropped. What am I doing wrong?



  • Average packet size on the Internet is something like 600 bytes. 50k packets × 600 bytes × 8 bits per byte = 240 Mbit of queue. Even if you had a 1Gb connection, that would be 240ms of queue, which is horribly large.

    Just check "Active Codel Queue" or whatever under the queue settings. It is already sized incredibly large but won't give you bufferbloat issues. And FYI, it is healthy for networks to drop packets; it is unhealthy to hold onto packets for long periods of time, which breaks all kinds of stuff.
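    For reference, the arithmetic above can be checked in a few lines (a sketch; the 600-byte average and the 50000-packet qlimit are the figures from this thread):

```python
# Rough queue-delay estimate: how long a completely full queue takes to drain.
AVG_PKT_BYTES = 600        # rough Internet-wide average packet size
QLIMIT_PKTS = 50_000       # the qlimit set on qP2P in this thread
LINK_BPS = 1_000_000_000   # 1 Gbit/s link

queue_bits = QLIMIT_PKTS * AVG_PKT_BYTES * 8   # 240,000,000 bits = 240 Mbit
drain_seconds = queue_bits / LINK_BPS          # worst-case added latency

print(f"{queue_bits / 1e6:.0f} Mbit of queue")      # -> 240 Mbit of queue
print(f"{drain_seconds * 1e3:.0f} ms at 1 Gbit/s")  # -> 240 ms at 1 Gbit/s
```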



  • @Harvy66:

    Average packet size on the Internet is something like 600 bytes. 50k packets × 600 bytes × 8 bits per byte = 240 Mbit of queue. Even if you had a 1Gb connection, that would be 240ms of queue, which is horribly large.

    Just check "Active Codel Queue" or whatever under the queue settings. It is already sized incredibly large but won't give you bufferbloat issues. And FYI, it is healthy for networks to drop packets; it is unhealthy to hold onto packets for long periods of time, which breaks all kinds of stuff.

    I am using "Active Codel Queue" on the main "qInternet", which consists of all the queues together, as I understand it. But I cannot figure out why the queue length behaves so strangely in this situation; if it is auto-tuned, why is that not clearly documented?



  • Why do you want a huge queue? What do you think it will improve?

    Packet drops are a necessity of most TCP congestion control algorithms. It's perfectly normal.



  • @w0w:

    @Harvy66:

    Average packet size on the Internet is something like 600 bytes. 50k packets × 600 bytes × 8 bits per byte = 240 Mbit of queue. Even if you had a 1Gb connection, that would be 240ms of queue, which is horribly large.

    Just check "Active Codel Queue" or whatever under the queue settings. It is already sized incredibly large but won't give you bufferbloat issues. And FYI, it is healthy for networks to drop packets; it is unhealthy to hold onto packets for long periods of time, which breaks all kinds of stuff.

    I am using "Active Codel Queue" on the main "qInternet", which consists of all the queues together, as I understand it. But I cannot figure out why the queue length behaves so strangely in this situation; if it is auto-tuned, why is that not clearly documented?

    Other than bandwidth, parent queue settings do not affect child queues as far as I know, though I could be wrong in some cases. "Codel" is a very specific algorithm that by definition auto-tunes. There is no reason to document it, because anyone who knows what Codel is already knows. Kind of like spelling out what RED or ECN implies.



  • @Nullity:

    Why do you want a huge queue? What do you think it will improve?

    Packet drops are a necessity of most TCP congestion control algorithms. It's perfectly normal.

    Just because I can. Why not? If it cannot be set for some reason, then the GUI should limit it.



  • @Harvy66:

    @w0w:

    @Harvy66:

    Average packet size on the Internet is something like 600 bytes. 50k packets × 600 bytes × 8 bits per byte = 240 Mbit of queue. Even if you had a 1Gb connection, that would be 240ms of queue, which is horribly large.

    Just check "Active Codel Queue" or whatever under the queue settings. It is already sized incredibly large but won't give you bufferbloat issues. And FYI, it is healthy for networks to drop packets; it is unhealthy to hold onto packets for long periods of time, which breaks all kinds of stuff.

    I am using "Active Codel Queue" on the main "qInternet", which consists of all the queues together, as I understand it. But I cannot figure out why the queue length behaves so strangely in this situation; if it is auto-tuned, why is that not clearly documented?

    Other than bandwidth, parent queue settings do not affect child queues as far as I know, though I could be wrong in some cases. "Codel" is a very specific algorithm that by definition auto-tunes. There is no reason to document it, because anyone who knows what Codel is already knows. Kind of like spelling out what RED or ECN implies.

    I don't use plain Codel; I use HFSC. The described bug/feature exists even with Codel not enabled on the queue.



  • Seems like you simply don't understand pfSense's traffic shaping well enough to be doing these tests, so you are expecting incorrect outcomes.

    Reading the entire pfSense wiki and perhaps The Book of PF, along with some HFSC tutorials and some CoDel tutorials, would be a good start.



  • @Harvy66:

    @w0w:

    @Harvy66:

    Average packet size on the Internet is something like 600bytes. 500k packets times 600 bytes * 8 bits per byte = 240mbits of queue. Even if you had a 1Gb connection, that would be 240ms of queue, which is horribly large.

    Just check "Active Codel Queue" or whatever under the queue setting. It is already sized incredibly large but won't give you buffer bloat issues. And fyi, it is healthy for networks to drop packets, it is unhealthy to hold onto packets for long periods of time, it breaks all kinds of stuff.

    I am using "Active Codel Queue" on main "qInternet" that consist of all queues together as I understand. But I can not find why queue length works very strange in this situation, if it autotuned, then why it not documented clearly?

    Other than bandwidth, parent queue settings do not affect child queues as far as I know, which I could be wrong in some cases. "Codel" is a very specific algorithm that by definition auto-tunes. There is no reason to document it because anyone knowing  what Codel is already knows. Kind of like saying what RED or ECN implies.

    Good post. :)

    Although I think OP's goal is nonsensical and should be fundamentally critiqued, I think your statement about the parent HFSC queue's CoDel/qlimit values not directly affecting the child queues is accurate.
    When viewing pftop, filling a child queue does not also fill the parent queue, IIRC, so that seems to support our claims… OP may want to confirm that, though. I'm not sure.



  • @Nullity:

    Seems like you simply don't understand pfSense's traffic shaping well enough to be doing these tests, so you are expecting incorrect outcomes.

    Reading the entire pfSense wiki and perhaps The Book of PF, along with some HFSC tutorials and some CoDel tutorials, would be a good start.

    This PF behavior is not expected; the same config worked without such problems on version 2.2, and I asked the simple question of whether it is a bug or a feature.
    If the HFSC child queue length is now auto-tuned in some cases, OK then, but I cannot find anything about it. And don't tell me about Codel; just forget it, this is NOT caused by Codel. I've done testing with the Codel options NOT enabled, and the problem persists in exactly the same way.



  • @w0w:

    @Nullity:

    Seems like you simply don't understand pfSense's traffic shaping well enough to be doing these tests, so you are expecting incorrect outcomes.

    Reading the entire pfSense wiki and perhaps The Book of PF, along with some HFSC tutorials and some CoDel tutorials, would be a good start.

    This PF behavior is not expected; the same config worked without such problems on version 2.2, and I asked the simple question of whether it is a bug or a feature.
    If the HFSC child queue length is now auto-tuned in some cases, OK then, but I cannot find anything about it. And don't tell me about Codel; just forget it, this is NOT caused by Codel. I've done testing with the Codel options NOT enabled, and the problem persists in exactly the same way.

    Post all details about your queues.



  • Attached.



  • Disable ECN.

    Also, you have two qP2P (WAN & LAN, or upload & download, respectively). Are you sure both are set to 50000?



  • @Nullity:

    Disable ECN.

    Also, you have two qP2P (WAN & LAN, or upload & download, respectively). Are you sure both are set to 50000?

    Yes, both are 50000. You can see it in the screenshot in the previous post.
    Disabled ECN; no effect.



  • @w0w:

    @Nullity:

    Disable ECN.

    Also, you have two qP2P (WAN & LAN, or upload & download, respectively). Are you sure both are set to 50000?

    Yes, both are 50000. You can see it in the screenshot in the previous post.
    Disabled ECN; no effect.

    So, it says 50000 until it's actually under load, when it changes to 5000?

    Try using pftop to view the queues' status. There have been many reported quirks with the Status / Queues graphs.

    Just so you know, queueing (adding additional delay to) packets going from a low-bandwidth link (ex. 100Mbit) to a high-bandwidth link (ex. 1Gbit) is nonsensical. With 2134 MTU-sized packets being needlessly queued, you are adding 256 milliseconds of latency to your download stream (2134 × 1500 × 8 = 25,608,000 bits on your 100Mbit connection). I am happy to try to solve this strange qlimit quirk, but please understand that you are trying to do something that makes no sense.
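    The latency figure above follows directly from the queued bits divided by the link rate; a minimal check (numbers taken from the post):

```python
# Added latency = bits sitting in the queue / bottleneck link rate.
pkts = 2134             # MTU-sized packets needlessly queued
mtu_bits = 1500 * 8     # bits per full-size packet
link_bps = 100_000_000  # 100 Mbit/s link

queued_bits = pkts * mtu_bits           # 25,608,000 bits
latency_ms = queued_bits / link_bps * 1000
print(f"{latency_ms:.0f} ms")           # -> 256 ms
```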



  • @Nullity:

    @w0w:

    @Nullity:

    Disable ECN.

    Also, you have two qP2P (WAN & LAN, or upload & download, respectively). Are you sure both are set to 50000?

    Yes, both are 50000. You can see it in the screenshot in the previous post.
    Disabled ECN; no effect.

    So, it says 50000 until it's actually under load, when it changes to 5000?

    Try using pftop to view the queues' status. There have been many reported quirks with the Status / Queues graphs.

    Just so you know, queueing (adding additional delay to) packets going from a low-bandwidth link (ex. 100Mbit) to a high-bandwidth link (ex. 1Gbit) is nonsensical. With 2134 MTU-sized packets being needlessly queued, you are adding 256 milliseconds of latency to your download stream (2134 × 1500 × 8 = 25,608,000 bits on your 100Mbit connection). I am happy to try to solve this strange qlimit quirk, but please understand that you are trying to do something that makes no sense.

    Yes, it says 50000 until it's actually under load, when it changes to 5000.

    It's a 300/300Mbit connection, and additional delay for P2P traffic is fully acceptable IMHO, if it's applied only when needed, not always.
    If you have read the first post, you know that the problem persists with 10000 as well, and maybe with a 1000-packet length.
    I'll do further testing using pftop to eliminate possible flaws in the GUI code.



  • Can you share a screen shot of the queue stats during an upload (WAN) load test?



  • @Nullity:

    Can you share a screen shot of the queue stats during an upload (WAN) load test?

    Sorry, I can't find a way to fill my outbound pipe with P2P traffic; it just doesn't want to happen.



  • Look what I have found.
    pfctl -s queue -v -v

    
    queue   qP2P on igb1 bandwidth 45Mb qlimit 50000 hfsc( red ecn default )
      [ pkts:      44769  bytes:    3136059  dropped pkts:      0 bytes:      0 ]
      [ qlength:   0/50000 ]
      [ measured:   116.1 packets/s, 68.58Kb/s ]
    
    

    Does "red" mean RED is enabled? This is the problem: I have not enabled it!

    During load:

    
    queue   qP2P on igb1 bandwidth 45Mb qlimit 50000 hfsc( red ecn default )
      [ pkts:    4326757  bytes: 6132296391  dropped pkts:    242 bytes: 353048 ]
      [ qlength: 4174/50000 ]
      [ measured: 25052.9 packets/s, 290.45Mb/s ]
    
    

    So it actually drops above a 5000 limit, but the queue length of 50000 is shown correctly this time. I think it's auto-tuned by RED? I can't find how the GUI gets the actual 5000 limit, but it looks like the right value.
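    For context, RED (Random Early Detection) tracks an average queue length and drops probabilistically between a minimum and maximum threshold, dropping everything once the average exceeds the maximum, regardless of qlimit. A toy sketch of the classic drop decision follows; the `min_th`/`max_th` values are hypothetical, picked only to mirror the ~5000-packet ceiling seen here, and pf/ALTQ chooses its own thresholds internally:

```python
# Classic RED drop decision (illustrative only -- not pf/ALTQ's internals).
# min_th/max_th are hypothetical values mirroring the observed 5000 ceiling.
import random

def red_drop(avg_qlen, min_th=2500, max_th=5000, max_p=0.1):
    """Return True if RED decides to drop at this average queue length."""
    if avg_qlen < min_th:
        return False   # below min threshold: never drop
    if avg_qlen >= max_th:
        return True    # above max threshold: always drop, even if qlimit is 50000
    # Between thresholds: drop probability ramps linearly up to max_p.
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return random.random() < p

print(red_drop(1000))  # -> False (well under min_th)
print(red_drop(6000))  # -> True  (over max_th, dropped regardless of qlimit)
```

    If pf enables RED on the queue behind your back, this would explain packets being dropped long before the configured 50000-packet qlimit is reached.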



  • @w0w:

    @Nullity:

    Can you share a screen shot of the queue stats during an upload (WAN) load test?

    Sorry, I can't find a way to fill my outbound pipe with P2P traffic; it just doesn't want to happen.

    Sharing a popular Linux distribution torrent like Ubuntu or Debian is a sure-fire way to saturate your upload.



  • @w0w:

    Look what I have found.
    pfctl -s queue -v -v

    
    queue   qP2P on igb1 bandwidth 45Mb qlimit 50000 hfsc( red ecn default )
      [ pkts:      44769  bytes:    3136059  dropped pkts:      0 bytes:      0 ]
      [ qlength:   0/50000 ]
      [ measured:   116.1 packets/s, 68.58Kb/s ]
    
    

    Does "red" mean RED is enabled? This is the problem: I have not enabled it!

    During load:

    
    queue   qP2P on igb1 bandwidth 45Mb qlimit 50000 hfsc( red ecn default )
      [ pkts:    4326757  bytes: 6132296391  dropped pkts:    242 bytes: 353048 ]
      [ qlength: 4174/50000 ]
      [ measured: 25052.9 packets/s, 290.45Mb/s ]
    
    

    So it actually drops above a 5000 limit, but the queue length of 50000 is shown correctly this time. I think it's auto-tuned by RED? I can't find how the GUI gets the actual 5000 limit, but it looks like the right value.

    Hmm, interesting results.

    You should have RED and ECN disabled (and any other AQMs) if your goal is to (needlessly) fill your over-sized queue. I thought you already disabled ECN… is the GUI reporting ECN as disabled when it's actually not?

    Again, like I said earlier, you ultimately never want an artificially inflated queue depth, so the fact that it does not fill is a good thing. Increasing latency is never the primary goal of a traffic-shaping setup. Usually increased latency is a side-effect of other goals, like increasing bandwidth or decreasing latency for other, higher-priority traffic.

    A more telling test would be saturating upload, rather than download, so try to get some results from an upload test. Download and upload QoS are very different. A great tutorial regarding those differences and other QoS fundamentals is: http://www.linksysinfo.org/index.php?threads/qos-tutorial.68795/

    Just for fun, you might try enabling CoDel or setting the qlimit to 1 on qP2P as a test to see if your actual performance changes. There is no research I am aware of that shows traffic-shaping (queueing packets) to be superior to traffic-policing (not queueing packets) with regard to ingress/download when moving from a low-bandwidth network node (ex. 300Mbit) to a higher-bandwidth node (ex. 1Gbit). Here's a Cisco article comparing shaping and policing: http://www.cisco.com/c/en/us/support/docs/quality-of-service-qos/qos-policing/19645-policevsshape.html
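    The shaping-vs-policing distinction in that Cisco article comes down to whether excess packets are delayed or discarded. A toy token-bucket policer (illustrative names and rates, not pf's implementation) shows the policing half:

```python
# Toy token-bucket policer: drops packets when tokens run out,
# where a shaper would instead queue them and release them later.
class TokenBucketPolicer:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps      # token refill rate in bits/second
        self.tokens = burst_bits  # current token count, starts full
        self.burst = burst_bits   # bucket depth (max burst)
        self.last = 0.0           # timestamp of last update

    def allow(self, pkt_bits, now):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bits:
            self.tokens -= pkt_bits
            return True   # conforming: forward immediately, no added delay
        return False      # exceeding: policer drops (a shaper would queue)

p = TokenBucketPolicer(rate_bps=1_000_000, burst_bits=12_000)  # 1 Mbit/s, one-MTU burst
print(p.allow(12_000, now=0.0))    # -> True  (burst allowance covers it)
print(p.allow(12_000, now=0.0))    # -> False (no tokens left, dropped)
print(p.allow(12_000, now=0.012))  # -> True  (12 ms refills 12,000 bits)
```

    The key trade-off: the policer never adds latency but drops aggressively, while a shaper smooths bursts at the cost of queueing delay.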



  • @Nullity:

    @w0w:

    @Nullity:

    Can you share a screen shot of the queue stats during an upload (WAN) load test?

    Sorry, I can't find a way to fill my outbound pipe with P2P traffic; it just doesn't want to happen.

    Sharing a popular Linux distribution torrent like Ubuntu or Debian is a sure-fire way to saturate your upload.

    Looking at their bandwidth settings, they're shaping to 300Mb/s. Seeding open-source torrents is not going to dent it. I seed almost 200 ISO images of the most popular Linux distros from my SSDs, and my average upload is about 10Mb/s on my 150Mb/s connection. They're already heavily seeded.

    Unless I jump on a hugely popular ISO within a few hours of it coming out, I will never max my connection, because there will already be so many seeders.



  • Take a Linux DVD ISO, rename it to Netflix_Stranger_Things_Season_2_Leak.zip, shove it up TPB and watch your bandwidth die  ;D



  • @Harvy66:

    @Nullity:

    @w0w:

    @Nullity:

    Can you share a screen shot of the queue stats during an upload (WAN) load test?

    Sorry, I can't find a way to fill my outbound pipe with P2P traffic; it just doesn't want to happen.

    Sharing a popular Linux distribution torrent like Ubuntu or Debian is a sure-fire way to saturate your upload.

    Looking at their bandwidth settings, they're shaping to 300Mb/s. Seeding open-source torrents is not going to dent it. I seed almost 200 ISO images of the most popular Linux distros from my SSDs, and my average upload is about 10Mb/s on my 150Mb/s connection. They're already heavily seeded.

    Unless I jump on a hugely popular ISO within a few hours of it coming out, I will never max my connection, because there will already be so many seeders.

    Do you know of any sure-fire way to saturate a 300Mbit upload? Maybe simultaneously uploading a few huge files to MEGA? Their bandwidth is pretty damn good and they offer 50GB of space for free.

    I see no reason why OP couldn't just lower qP2P's bandwidth (with upper-limit), or the entire WAN interface's bandwidth, temporarily for testing purposes, which would make full upload saturation easy.



  • I like Nullity's idea of artificially limiting the bandwidth to something low enough to saturate. I would not recommend going under 5Mb, because other queueing issues start to crop up, since 1500-byte MTUs are relatively large as you near 1Mb.

    There are also public iperf servers. With a 150Mb connection myself, I could possibly be of help.
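    The reason very low shaped rates get quirky is serialization delay: a single full-size packet occupies a meaningful slice of time on the wire. A quick check (illustrative rates):

```python
# Serialization time of one full 1500-byte packet at various shaped rates;
# at 1 Mbit/s a single MTU-sized packet occupies the link for 12 ms.
MTU_BITS = 1500 * 8  # bits in a full-size packet

for mbps in (100, 5, 1):
    ms = MTU_BITS / (mbps * 1_000_000) * 1000
    print(f"{mbps:>3} Mbit/s: {ms:.2f} ms per packet")
```

    At 12 ms per packet, even a queue of a few packets adds noticeable latency, which is why shaping well below 5Mb behaves strangely.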



  • Ok, I've disabled ECN everywhere.

    
    queue   qP2P on igb1 bandwidth 45Mb qlimit 50000 hfsc( default )
      [ pkts:    7536667  bytes: 10776855638  dropped pkts:      0 bytes:      0 ]
      [ qlength: 9404/50000 ]
    
    

    EDITED: I see now that pfctl shows me correct values everywhere and the GUI does not! The GUI still shows 5000 under load.
    OK, it just looks like the GUI is broken. The queue length really fills up to the limit, I think.