Queue length in LAN shaper
-
pfSense 2.3.2
I have a qP2P queue on the LAN interface with a queue limit of 50000 packets, but under heavy load the Queue Status page shows a limit of 5000 and drops packets over that 5000 limit. If I set the limit to 10000 (less than 50000), it behaves as if it were 100: the Queue Status page shows 10000, but everything above 100 is dropped. What am I doing wrong?
-
Average packet size on the Internet is something like 600 bytes. 50k packets times 600 bytes times 8 bits per byte = 240 Mbit of queue. Even if you had a 1Gb connection, that would be 240ms of queue, which is horribly large.
Just check "Active Codel Queue" or whatever under the queue settings. It is already sized incredibly large, but won't give you bufferbloat issues. And FYI, it is healthy for networks to drop packets; it is unhealthy to hold onto packets for long periods of time, because it breaks all kinds of stuff.
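The arithmetic above can be sketched quickly; the inputs (50k packets, ~600-byte average packet, 1 Gbit link) come straight from the post:

```python
# Worst-case delay implied by a packet-count qlimit: how long it takes to
# drain a completely full queue at line rate.

def queue_delay_s(qlimit_pkts, avg_pkt_bytes, link_bps):
    """Seconds needed to drain a full queue at line rate."""
    queue_bits = qlimit_pkts * avg_pkt_bytes * 8
    return queue_bits / link_bps

queue_bits = 50_000 * 600 * 8                     # 240,000,000 bits = 240 Mbit
delay = queue_delay_s(50_000, 600, 1_000_000_000)
print(queue_bits, delay)                          # 240000000 0.24 (240 ms at 1 Gbit/s)
```

At the poster's actual 300 Mbit link the same full queue would take three times longer to drain.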
-
Average packet size on the Internet is something like 600 bytes. 50k packets times 600 bytes times 8 bits per byte = 240 Mbit of queue. Even if you had a 1Gb connection, that would be 240ms of queue, which is horribly large.
Just check "Active Codel Queue" or whatever under the queue settings. It is already sized incredibly large, but won't give you bufferbloat issues. And FYI, it is healthy for networks to drop packets; it is unhealthy to hold onto packets for long periods of time, because it breaks all kinds of stuff.
I am using "Active Codel Queue" on the main "qInternet" queue, which, as I understand it, consists of all the queues together. But I cannot figure out why the queue length behaves so strangely in this situation; if it is auto-tuned, why isn't that clearly documented?
-
Why do you want a huge queue? What do you think it will improve?
Packet drops are a necessity of most TCP congestion control algorithms. It's perfectly normal.
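The point about drops being part of congestion control can be illustrated with a toy AIMD (additive-increase, multiplicative-decrease) loop, the behavior classic TCP Reno relies on. The window sizes here are illustrative only, not from any real trace:

```python
def aimd(cwnd, loss):
    """One round of TCP-Reno-style AIMD: halve on loss, else grow by one segment."""
    return max(1, cwnd // 2) if loss else cwnd + 1

# A sender probes upward until the bottleneck queue overflows and drops a
# packet; that drop is the only signal telling it to back off.
cwnd = 10
trace = []
for rtt in range(6):
    loss = cwnd >= 13          # pretend the path can hold ~13 segments
    cwnd = aimd(cwnd, loss)
    trace.append(cwnd)
print(trace)                   # [11, 12, 13, 6, 7, 8]
```

Without the drop at the fourth round, the sender would keep growing its window and the queue would only get longer.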
-
@w0w:
Average packet size on the Internet is something like 600 bytes. 50k packets times 600 bytes times 8 bits per byte = 240 Mbit of queue. Even if you had a 1Gb connection, that would be 240ms of queue, which is horribly large.
Just check "Active Codel Queue" or whatever under the queue settings. It is already sized incredibly large, but won't give you bufferbloat issues. And FYI, it is healthy for networks to drop packets; it is unhealthy to hold onto packets for long periods of time, because it breaks all kinds of stuff.
I am using "Active Codel Queue" on the main "qInternet" queue, which, as I understand it, consists of all the queues together. But I cannot figure out why the queue length behaves so strangely in this situation; if it is auto-tuned, why isn't that clearly documented?
Other than bandwidth, parent queue settings do not affect child queues as far as I know, though I could be wrong in some cases. "Codel" is a very specific algorithm that by definition auto-tunes. There is no reason to document that, because anyone who knows what CoDel is already knows it. Kind of like spelling out what RED or ECN implies.
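For reference, CoDel's "auto-tuning" works on packet sojourn time rather than queue length: it only starts dropping when packets have been queued longer than a target (5 ms by default) continuously for at least one interval (100 ms by default). A heavily simplified sketch of that entry condition, not pfSense's actual implementation:

```python
TARGET_S = 0.005     # 5 ms default target sojourn time
INTERVAL_S = 0.100   # 100 ms default interval

def codel_should_drop(sojourn_s, above_since_s, now_s):
    """Simplified CoDel entry condition.

    above_since_s: time when sojourn first exceeded TARGET_S (None if it hasn't).
    Returns (drop, new_above_since_s).
    """
    if sojourn_s < TARGET_S:
        return False, None                  # below target: reset state
    if above_since_s is None:
        return False, now_s                 # start the clock
    if now_s - above_since_s >= INTERVAL_S:
        return True, above_since_s          # over target for a full interval: drop
    return False, above_since_s

# Packets delayed 20 ms: no drop until that condition persists for 100 ms.
drop, since = codel_should_drop(0.020, None, 0.0)
print(drop)                                 # False
drop, since = codel_should_drop(0.020, since, 0.150)
print(drop)                                 # True
```

The real algorithm also increases the drop rate over time while the condition holds, but the key point is that no fixed queue-length threshold is involved.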
-
Why do you want a huge queue? What do you think it will improve?
Packet drops are a necessity of most TCP congestion control algorithms. It's perfectly normal.
Just because I can. Why not? If it cannot be set for some reason, then the GUI should limit it.
-
@w0w:
Average packet size on the Internet is something like 600 bytes. 50k packets times 600 bytes times 8 bits per byte = 240 Mbit of queue. Even if you had a 1Gb connection, that would be 240ms of queue, which is horribly large.
Just check "Active Codel Queue" or whatever under the queue settings. It is already sized incredibly large, but won't give you bufferbloat issues. And FYI, it is healthy for networks to drop packets; it is unhealthy to hold onto packets for long periods of time, because it breaks all kinds of stuff.
I am using "Active Codel Queue" on the main "qInternet" queue, which, as I understand it, consists of all the queues together. But I cannot figure out why the queue length behaves so strangely in this situation; if it is auto-tuned, why isn't that clearly documented?
Other than bandwidth, parent queue settings do not affect child queues as far as I know, though I could be wrong in some cases. "Codel" is a very specific algorithm that by definition auto-tunes. There is no reason to document that, because anyone who knows what CoDel is already knows it. Kind of like spelling out what RED or ECN implies.
I don't use plain CoDel; I use HFSC. The described bug/feature exists even with CoDel not enabled on the queue.
-
It seems like you simply don't understand pfSense's traffic shaping well enough to be doing these tests, so you are expecting incorrect outcomes.
Reading the entire pfSense wiki, and perhaps The Book of PF, along with some HFSC and CoDel tutorials, would be a good start.
-
@w0w:
Average packet size on the Internet is something like 600 bytes. 50k packets times 600 bytes times 8 bits per byte = 240 Mbit of queue. Even if you had a 1Gb connection, that would be 240ms of queue, which is horribly large.
Just check "Active Codel Queue" or whatever under the queue settings. It is already sized incredibly large, but won't give you bufferbloat issues. And FYI, it is healthy for networks to drop packets; it is unhealthy to hold onto packets for long periods of time, because it breaks all kinds of stuff.
I am using "Active Codel Queue" on the main "qInternet" queue, which, as I understand it, consists of all the queues together. But I cannot figure out why the queue length behaves so strangely in this situation; if it is auto-tuned, why isn't that clearly documented?
Other than bandwidth, parent queue settings do not affect child queues as far as I know, though I could be wrong in some cases. "Codel" is a very specific algorithm that by definition auto-tunes. There is no reason to document that, because anyone who knows what CoDel is already knows it. Kind of like spelling out what RED or ECN implies.
Good post. :)
Although I think OP's goal is nonsensical and should be fundamentally critiqued, I think your statement about the parent HFSC queue's CoDel/qlimit values not directly affecting the child queues is accurate.
When viewing pftop, filling a child queue does not also fill the parent queue, IIRC, so that seems to support our claims… OP may want to confirm that, though. I'm not sure.
-
It seems like you simply don't understand pfSense's traffic shaping well enough to be doing these tests, so you are expecting incorrect outcomes.
Reading the entire pfSense wiki, and perhaps The Book of PF, along with some HFSC and CoDel tutorials, would be a good start.
This behavior of pf is not expected; the same config worked without such problems on version 2.2, and I asked the simple question: is it a bug or a feature?
If the HFSC child queue length is now auto-tuned in some cases, OK, but I cannot find anything about it. And don't tell me about CoDel; just forget it, this is NOT caused by CoDel. I've tested with the CoDel options NOT enabled and the problem persists in exactly the same way.
-
@w0w:
It seems like you simply don't understand pfSense's traffic shaping well enough to be doing these tests, so you are expecting incorrect outcomes.
Reading the entire pfSense wiki, and perhaps The Book of PF, along with some HFSC and CoDel tutorials, would be a good start.
This behavior of pf is not expected; the same config worked without such problems on version 2.2, and I asked the simple question: is it a bug or a feature?
If the HFSC child queue length is now auto-tuned in some cases, OK, but I cannot find anything about it. And don't tell me about CoDel; just forget it, this is NOT caused by CoDel. I've tested with the CoDel options NOT enabled and the problem persists in exactly the same way.
Post all details about your queues.
-
Attached.
-
Disable ECN.
Also, you have two qP2P (WAN & LAN, or upload & download, respectively). Are you sure both are set to 50000?
-
Disable ECN.
Also, you have two qP2P (WAN & LAN, or upload & download, respectively). Are you sure both are set to 50000?
Yes, both are 50000. You can see it in the screenshot in the previous post.
Disabled ECN, no effect.
-
@w0w:
Disable ECN.
Also, you have two qP2P (WAN & LAN, or upload & download, respectively). Are you sure both are set to 50000?
Yes both are 50000. You can see it on the screenshot in previous post.
Disabled ECN, no effect.
So it says 50000 until it's actually under load, where it changes to 5000?
Try using pftop to view the queues' status. There have been many reported quirks with the Status / Queues graphs.
Just so you know, queueing (adding additional delay to) packets going from a low-bandwidth link (e.g. 100Mbit) to a high-bandwidth link (e.g. 1Gbit) is nonsensical. With 2134 MTU-sized packets being needlessly queued, you are adding 256 milliseconds of latency to your download stream (2134 × 1500 × 8 = 25608000 bits on your 100Mbit connection). I am happy to try to solve this strange qlimit quirk, but please understand that you are trying to do something that makes no sense.
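The latency figure quoted above checks out; a quick sketch using the post's own numbers (2134 MTU-sized packets of 1500 bytes on a 100 Mbit link):

```python
def added_latency_ms(pkts, pkt_bytes, link_bps):
    """Milliseconds of delay added by pkts queued ahead of a new arrival."""
    return pkts * pkt_bytes * 8 / link_bps * 1000

bits = 2134 * 1500 * 8
print(bits)                                              # 25608000
print(round(added_latency_ms(2134, 1500, 100_000_000)))  # 256
```

The same queue on a 300 Mbit link, as the poster later mentions having, would still add roughly 85 ms.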
-
@w0w:
Disable ECN.
Also, you have two qP2P (WAN & LAN, or upload & download, respectively). Are you sure both are set to 50000?
Yes both are 50000. You can see it on the screenshot in previous post.
Disabled ECN, no effect.
So it says 50000 until it's actually under load, where it changes to 5000?
Try using pftop to view the queues' status. There have been many reported quirks with the Status / Queues graphs.
Just so you know, queueing (adding additional delay to) packets going from a low-bandwidth link (e.g. 100Mbit) to a high-bandwidth link (e.g. 1Gbit) is nonsensical. With 2134 MTU-sized packets being needlessly queued, you are adding 256 milliseconds of latency to your download stream (2134 × 1500 × 8 = 25608000 bits on your 100Mbit connection). I am happy to try to solve this strange qlimit quirk, but please understand that you are trying to do something that makes no sense.
Yes, it says 50000 until it's actually under load, where it changes to 5000.
It's a 300/300Mbit connection, and additional delay for p2p traffic is fully acceptable IMHO, if it's applied only when needed, not always.
If you have read the first post, you know that the problem persists with 10000 as well, and maybe with a 1000-packet limit.
I'll do further testing using pftop to rule out possible GUI code flaws.
-
Can you share a screen shot of the queue stats during an upload (WAN) load test?
-
Can you share a screen shot of the queue stats during an upload (WAN) load test?
Sorry, I can't find a way to fill my outbound pipe with p2p traffic; it just doesn't want to do it.
-
Look what I have found.
pfctl -s queue -v -v

queue qP2P on igb1 bandwidth 45Mb qlimit 50000 hfsc( red ecn default ) [ pkts: 44769 bytes: 3136059 dropped pkts: 0 bytes: 0 ] [ qlength: 0/50000 ] [ measured: 116.1 packets/s, 68.58Kb/s ]
Does "red" mean RED is enabled? This is the problem. I have not enabled it!
During load:
queue qP2P on igb1 bandwidth 45Mb qlimit 50000 hfsc( red ecn default ) [ pkts: 4326757 bytes: 6132296391 dropped pkts: 242 bytes: 353048 ] [ qlength: 4174/50000 ] [ measured: 25052.9 packets/s, 290.45Mb/s ]
So it actually drops above a 5000 limit, but this time the queue length of 50000 is shown correctly. I think it's auto-tuned by RED? I can't find where the GUI gets the actual 5000 limit, but it looks like the right value.
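For context, classic RED (Floyd/Jacobson) keeps an exponentially weighted average queue length and starts dropping probabilistically between a min and max threshold, dropping everything once the average exceeds the max; those thresholds sit well below qlimit, which would explain drops starting long before 50000. A generic sketch of the classic drop probability; the threshold values below are made up for illustration, not what ALTQ actually derives:

```python
def red_drop_prob(avg_qlen, min_th, max_th, max_p=0.1):
    """Classic RED: 0 below min_th, ramp to max_p at max_th, 1 above max_th."""
    if avg_qlen < min_th:
        return 0.0
    if avg_qlen >= max_th:
        return 1.0
    return max_p * (avg_qlen - min_th) / (max_th - min_th)

# Hypothetical thresholds chosen so max_th = 5000; ALTQ computes its own
# defaults from the queue configuration.
print(red_drop_prob(1000, 2500, 5000))   # 0.0  below min threshold
print(red_drop_prob(3750, 2500, 5000))   # 0.05 halfway up the ramp
print(red_drop_prob(6000, 2500, 5000))   # 1.0  everything dropped
```

If RED's max threshold is being computed independently of the configured qlimit, the queue would behave exactly as described: qlength reported against 50000 while drops begin near a much smaller value.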
-
@w0w:
Can you share a screen shot of the queue stats during an upload (WAN) load test?
Sorry, I can't find a way to fill my outbound pipe with p2p traffic; it just doesn't want to do it.
Sharing a popular Linux distribution torrent like Ubuntu or Debian is a sure-fire way to saturate your upload.