Traffic Shaper - Queue Length and Dropped Packets
-
Aside from ECN, or perhaps a prematurely undersized TCP RWIN, how does modern TCP work without dropped packets? Citing an RFC would be preferable.
The question of why dropped packets would happen during low utilization but not during high utilization remains unanswered.
-
TCP has RFCs, but the congestion control typically does not.
Win8 uses CTCP by default: http://en.wikipedia.org/wiki/Compound_TCP
The aim is to keep their sum approximately constant, at what the algorithm estimates is the path's bandwidth-delay product. In particular, when queueing is detected, the delay-based window is reduced by the estimated queue size to avoid the problem of "persistent congestion" reported for FAST and Vegas.
Another one out there: http://en.wikipedia.org/wiki/TCP_Vegas
TCP Vegas detects congestion at an incipient stage based on increasing Round-Trip Time (RTT) values of the packets in the connection
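Both CTCP's delay window and Vegas lean on the same primitive: compare the throughput you would expect with empty queues against what you actually measure, and treat the gap as an estimate of how many of your packets are sitting in queues along the path. A minimal Python sketch of that Vegas-style estimate (the alpha/beta thresholds are the commonly cited values; this is a toy model, not a real TCP stack):

# Illustrative sketch of Vegas-style delay-based congestion detection.
def vegas_adjust(cwnd, base_rtt, current_rtt, alpha=2, beta=4):
    """Adjust the congestion window (in packets) from RTT measurements alone."""
    expected = cwnd / base_rtt               # rate if no queueing (pkts/s)
    actual = cwnd / current_rtt              # rate actually observed (pkts/s)
    queued = (expected - actual) * base_rtt  # estimated packets in queue

    if queued < alpha:
        return cwnd + 1   # path looks empty: grow
    if queued > beta:
        return cwnd - 1   # queue building: back off *before* any drop
    return cwnd           # hold steady

# Example: 100-packet window, 50ms base RTT, 60ms measured RTT
# => ~17 packets estimated queued => shrink the window.
print(vegas_adjust(100, 0.050, 0.060))  # -> 99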
-
Interesting. So if his LAN clients were running Windows 8, he might see no dropped packets?
I thought the most widely adopted algorithms were CUBIC (Linux default) and NewReno (pfSense default). I am painfully ignorant of Windows nowadays.
-
It is possible to never see dropped packets, but they are bound to happen under extreme circumstances. I typically go days with no dropped packets in any queue in either direction. I have been seeding torrents quite hard lately, so I may have more drops than I used to, but I have seen at least one whole month with zero dropped packets, and that included at least a few times when I maxed out my connection while still maintaining low pings.
-
Example
pfTop: Up Queue 1-10/10, View: queue

QUEUE        BW   SCH   PR  PKTS   BYTES  DROP_P  DROP_B  QLEN  BORR  SUSP  P/S  B/S
root_igb0    98M  hfsc  0   0      0      0       0       0
 qACK        0    hfsc      32M    1872M  0       0       0
 qDefault    14M  hfsc      591M   776G   0       0       0
 qHigh       29M  hfsc      2137K  193M   0       0       0
 qNormal     29M  hfsc      1244K  745M   0       0       0
root_igb1    95M  hfsc  0   0      0      0       0       0
 qDefault    14M  hfsc      289M   53G    0       0       0
 qHigh       28M  hfsc      1633K  260M   0       0       0
 qNormal     28M  hfsc      48M    70G    0       0       0
 qACK        0    hfsc      108M   6844M  0       0       0
The queue length (QLEN) is pretty much always at 0.
-
Which network layers is the "QueueDrops" graph aware of?
Unless the queue is tracking the TCP layer, it could not know about missing/errored TCP segments. The only drops the graph would be logging are IP packets (or Ethernet frames?) that overflow the queue; meanwhile your AQM flow-controls the TCP streams, keeping them from overflowing the queue.
Assuming the above is true…
tuffcalc: Perhaps set up a high-priority queue for non-TCP traffic only. Since a sender (LAN client) can sense congestion/flow control on TCP streams (via AQM/traffic-shaping, perhaps), those streams most likely will not be dropped. Non-TCP has no flow control, so you can only delay or drop; if you want fewer drops, it is probably best to let the non-TCP packets preempt the resilient TCP streams. I am assuming most non-TCP traffic is delay-sensitive. (Referring to egress.)
I have wondered if non-TCP should always be highest priority on ingress, since those packets are already there. Dropping them is just wasted bandwidth.
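To make the "let non-TCP preempt TCP" idea concrete, here is a toy strict-priority dequeue in Python. The queue names and packet format are invented for illustration; in pfSense you would express the same thing with queue priorities and filter rules:

# Toy strict-priority scheduler: non-TCP (no flow control of its own)
# always goes first; TCP absorbs the extra delay by shrinking its window.
from collections import deque

non_tcp_q = deque()  # ICMP, UDP, etc. (hypothetical names)
tcp_q = deque()

def enqueue(pkt):
    (tcp_q if pkt["proto"] == "tcp" else non_tcp_q).append(pkt)

def dequeue():
    if non_tcp_q:
        return non_tcp_q.popleft()  # delay-sensitive traffic preempts
    return tcp_q.popleft() if tcp_q else None

enqueue({"proto": "icmp", "seq": 1})
enqueue({"proto": "tcp", "seq": 2})
print(dequeue()["proto"])  # -> icmp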
-
Drops are any packets that the queue drops because it becomes full. In my case, I am using codel, so it will start dropping packets when more than 5ms of packets get backlogged. Since I have codel applied to all queues on all interfaces except qACK, for all the data transferred, it has never dropped any packets.
Packets could be dropped by other hops, but at no point did pfSense drop anything. Packets will pretty much only be dropped at hops that go from a fast link to a slow link. The first two that come to mind are my ISP going from 10Gb+ down to my 100Mb connection, and my 1Gb LAN going to my 100Mb uplink.
I have seen codel on my WAN drop packets, but only when BitTorrent ramped up really fast. I have also seen codel on my LAN drop packets, but only during similar situations.
If packet drops were the only way modern TCP stacks detect congestion, I should always be seeing some dropped packets any time my connection is at 100% in either direction, because at those moments, pfSense is the bottleneck.
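For reference, the dropping rule described above is roughly the following (a simplified Python sketch: target and interval are CoDel's published defaults, and real CoDel also tightens the drop spacing by sqrt(count), which is omitted here):

# Simplified sketch of CoDel's drop decision; not the real implementation.
TARGET = 0.005    # 5ms: acceptable standing queue delay
INTERVAL = 0.100  # 100ms: how long delay must persist before dropping

class CodelSketch:
    def __init__(self):
        self.first_above_time = None  # when sojourn first exceeded TARGET

    def should_drop(self, sojourn_time, now):
        """sojourn_time: how long this packet sat in the queue (seconds)."""
        if sojourn_time < TARGET:
            self.first_above_time = None  # queue drained: reset
            return False
        if self.first_above_time is None:
            self.first_above_time = now + INTERVAL
            return False
        # Delay has stayed above TARGET for a full INTERVAL: start dropping.
        return now >= self.first_above_time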
-
Drops are any packets that the queue drops because it becomes full. In my case, I am using codel, so it will start dropping packets when more than 5ms of packets get backlogged. Since I have codel applied to all queues on all interfaces except qACK, for all the data transferred, it has never dropped any packets.
Packets could be dropped by other hops, but at no point did pfSense drop anything. Packets will pretty much only be dropped at hops that go from a fast link to a slow link. The first two that come to mind are my ISP going from 10Gb+ down to my 100Mb connection, and my 1Gb LAN going to my 100Mb uplink.
I have seen codel on my WAN drop packets, but only when BitTorrent ramped up really fast. I have also seen codel on my LAN drop packets, but only during similar situations.
If packet drops were the only way modern TCP stacks detect congestion, I should always be seeing some dropped packets any time my connection is at 100% in either direction, because at those moments, pfSense is the bottleneck.
Part of the confusion here is that I cannot tell whether we are referring to drops caused by a full queue or drops detected by TCP, perhaps related to artificial stream throttling by a traffic-shaper. You are seeing no TCP drops because you are looking at QueueDrops. Run a packet sniffer if you want to see TCP drops/reorders/dupes.
TCP packet drops (congestion control) are not the only way to rate-limit, but flow control (the TCP sliding window) is set at the receiver, unlike congestion control. Let's say Vegas or C-TCP were employed: how would a router know that the sender used an algorithm that could modify its congestion window without drops? Packet drops are supported by all TCP congestion control algos; the non-drop method is not.
OP: I just ran into a problem when I mixed TCP and non-TCP packets in a queue and used CoDel on the queue. Actually, no queue-drops registered, but CoDel was dropping ~30% of the ping packets being sent through. I now make sure to use CoDel only on queues filled exclusively with TCP.
-
OP: I just ran into a problem when I mixed TCP and non-TCP packets in a queue and used CoDel on the queue. Actually, no queue-drops registered, but CoDel was dropping ~30% of the ping packets being sent through. I now make sure to use CoDel only on queues filled exclusively with TCP.
So you have a queue for both TCP and ICMP and see ICMP packet loss, but nothing registered with the queue? Are you sure the packets are being dropped by pfSense and not by the connection to your ISP?
-
I would be very interested in bufferbloat-reduction reports using either the rrul test suite or the new dslreports tool regarding hfsc + fairqueue + codel, in particular.
http://www.internetsociety.org/blog/tech-matters/2015/04/measure-your-bufferbloat-new-browser-based-tool-dslreports
-
OP: I just ran into a problem when I mixed TCP and non-TCP packets in a queue and used CoDel on the queue. Actually, no queue-drops registered, but CoDel was dropping ~30% of the ping packets being sent through. I now make sure to use CoDel only on queues filled exclusively with TCP.
So you have a queue for both TCP and ICMP and see ICMP packet loss, but nothing registered with the queue? Are you sure the packets are being dropped by pfSense and not by the connection to your ISP?
With CoDel on a mixed-protocol queue, during a saturated upload test, if my ping packets went through said queue they had CoDel-like latency (30-60ms), but ~30% of the pings went unanswered. No QueueDrops registered. The queue averaged about 3 packets.
If CoDel was disabled on the queue, ping latency was 600ms with 0% ping packet loss; the queue averaged about 30 packets. When I assigned pings to another queue, I had 0% packet loss and the same latency.
I did not run tcpdump to see where the packet was last seen.
-
So it sounds like the queue statistics may not be reporting drops correctly when using codel?
-
So it sounds like the queue statistics may not be reporting drops correctly when using codel?
That is what I was thinking as well.
I wonder if it should register. The queue never overflows, technically, right?
-
I guess it could be possible that the way FreeBSD records dropped packets is if the queue responds with a "full" return code when attempting to enqueue from the wire to the queue. If that is the only way for it to track drops, then you'll never see drops that are internal to codel. That would also mean that those times I did see drops, it was because the queue actually got full, not because of max latency in the queue.
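A sketch of that hypothesis (all names hypothetical; this is not the actual FreeBSD/ALTQ code): if the counter behind DROP_P is only bumped on the enqueue-side "full" path, anything the AQM discards on the dequeue side never shows up in the stats.

# Hypothetical model of the drop-accounting theory; not real ALTQ code.
from collections import deque

MAXLEN = 50
queue = deque()
stats = {"drop_p": 0}  # what pfTop's DROP_P column would show

def enqueue(pkt, now):
    if len(queue) >= MAXLEN:
        stats["drop_p"] += 1  # tail drop: the only path that is counted
        return "full"
    pkt["enqueued_at"] = now
    queue.append(pkt)
    return "ok"

def dequeue(now, target=0.005):
    while queue:
        pkt = queue.popleft()
        if now - pkt["enqueued_at"] > target:  # stand-in for CoDel's decision
            continue  # dropped inside the AQM: counted nowhere
        return pkt
    return None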
-
So it sounds like the queue statistics may not be reporting drops correctly when using codel?
That is what I was thinking as well.
I wonder if it should register. The queue never overflows, technically, right?
I have this issue as well. Codel drops never register.
Is it an issue? From a queue perspective, the queue never overflows (technically?) when codel is implemented.
More to the point, is this discrepancy causing any problems?
-
If you're trying to diagnose why a packet came in the WAN but not out the LAN, it would be nice to know your dropped packet count went up by one instead of it just disappearing with no indication anywhere as to why.
-
If you're trying to diagnose why a packet came in the WAN but not out the LAN, it would be nice to know your dropped packet count went up by one instead of it just disappearing with no indication anywhere as to why.
Like I mentioned earlier in the thread, I think queue drops are only logged as such if the queue is the reason for the drop.
Other drops are not, but should they be? I dunno. I would prefer that all the rrdtool graphs in pfSense were more reliable, but since I do not use those graphs for anything important, I kinda don't care…
I would probably use tcpdump to solve the situation you describe, because I am unclear of the semantics involved in the interface/queue stats.