HFSC basic minimums & maximums
-
I think part of your config's problem is that your 10Mbit limit limits everything, leaving no room for other traffic. Always leave some bandwidth for new, high-priority traffic.
I thought the purpose of defining minimums in HFSC was that a minimum is honoured only when there is data in that queue; if the queue is empty, its bandwidth is available to the other queues? That's why I have the percentages in the Bandwidth fields of my leaf nodes. Reserving 1Mbps of throughput outright sounds counter-intuitive. Should minimums be defined elsewhere instead?
Especially with HFSC link-share, you can use plain values instead of percentages, since either way they just proportion the bandwidth.
I understand HFSC only uses normal values, and strips additional characters. I'm keeping the percentages there for more of a visual reference to me, or anyone who may come in after me.
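To illustrate the equivalence, here is a rough pf.conf ALTQ sketch (the interface name and the exact queue layout are assumptions, not the OP's actual config):

```pf
# Assumed 10Mb WAN on em0; queue names are illustrative.
altq on em0 hfsc bandwidth 10Mb queue { qHigh, qNormal, qLow }

# Link-share as percentages...
queue qHigh   bandwidth 50% hfsc( linkshare 50% )
queue qNormal bandwidth 30% hfsc( linkshare 30%, default )
queue qLow    bandwidth 20% hfsc( linkshare 20% )

# ...proportions bandwidth identically to absolute values:
# queue qHigh   bandwidth 5Mb hfsc( linkshare 5Mb )
# queue qNormal bandwidth 3Mb hfsc( linkshare 3Mb, default )
# queue qLow    bandwidth 2Mb hfsc( linkshare 2Mb )
```

Either form yields the same 5:3:2 split of whatever bandwidth is available.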
- WAN - Codel on qInternet doesn't affect anything. qInternet doesn't actually carry traffic; it's just the parent "queue" — metadata describing how to split the bandwidth. Only leaf queues get traffic.
- WAN - You technically do not need to set the upper limit on your WAN since you configured the interface itself to be limited to 10Mb. That said, the interface may limit differently (for better or worse) than HFSC's upperlimit would.
- LAN/WAN - Enable Codel on all of your child queues qHigh/qNormal/qLow for both WAN and LAN.
All good advice, thanks! Although I need to set the upperlimit on WAN to throttle my upload.
So my question still remains:
- Have I configured the minimum guaranteed bandwidth for each queue correctly? (see my OP for config) -
I think part of your config's problem is that your 10Mbit limit limits everything, leaving no room for other traffic. Always leave some bandwidth for new, high-priority traffic.
I thought the purpose of defining minimums in HFSC was that a minimum is honoured only when there is data in that queue; if the queue is empty, its bandwidth is available to the other queues? That's why I have the percentages in the Bandwidth fields of my leaf nodes. Reserving 1Mbps of throughput outright sounds counter-intuitive. Should minimums be defined elsewhere instead?
Refer to the linksys link I posted to see the differences/problems with download. Depending on latency, you must allow for some headroom because the sender always takes time to respond to your request to slow down. The time between request and actually receiving the slower rate can vary (10ms, 100ms, 1000ms?). Until the traffic is slowed, you effectively have zero bandwidth for other traffic, which is most impactful on short-lived connections like web browsing.
Yeah, you are defining minimums, but those only apply as traffic leaves the interface. Incoming traffic is unpredictable and is only "controlled" as a side-effect of controlling output.
Ultimately, just intelligently try different methods of traffic-shaping and see what works.
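Since download can only be shaped indirectly, the usual approach is to shape it on LAN egress, with some headroom below the real line rate so the queue builds locally rather than at the ISP. A rough pf.conf-style ALTQ sketch — the em1 interface, the ~15% headroom figure, and the queue names are all assumptions, not the OP's config:

```pf
# Shape "download" traffic as it leaves the LAN interface, below
# the real 10Mb line rate, so congestion appears here first.
altq on em1 hfsc bandwidth 8500Kb queue { qDownHigh, qDownDefault }
queue qDownHigh    bandwidth 30% hfsc( linkshare 30% )
queue qDownDefault bandwidth 70% hfsc( linkshare 70%, default )
```

The headroom is what gives the shaper time to signal senders to slow down before the ISP's own buffer fills.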
-
… Depending on latency, you must allow for some headroom because the sender always takes time to respond to your request to slow down. ... Until the traffic is slowed, you effectively have zero bandwidth for other traffic, which is most impactful on short-lived connections like web browsing.
Ah yes, I had forgotten about the response time to allow for slowdown. Good call, thanks! I'll futz with the overhead and enable Codel on my leaf nodes and see how that goes.
-
I think part of your config's problem is that your 10Mbit limit limits everything, leaving no room for other traffic. Always leave some bandwidth for new, high-priority traffic. For example, you could set your Default queue to 9Mbit upper-limit, leaving 1Mbit headroom for any other, non-defaulted traffic.
You might also try enabling codel on the WAN or simply set the queue length to 1 (or 0?). IMO, buffering on ingress, primarily when going from a slow link to a fast link, is completely useless. Why artificially delay a packet that can be transmitted? I dunno if pfSense is capable of bufferless traffic-shaping ("traffic-policing") though.
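Both suggestions might look something like this in pf.conf ALTQ terms (a sketch with assumed values — 9Mb/1Mb split and the qlimit figure are illustrative, and I'm not certain how pfSense's GUI exposes the qlimit):

```pf
# Cap the default queue below the 10Mb line to leave headroom for
# new high-priority traffic, and keep the queue shallow (qlimit)
# to minimize buffering on ingress.
altq on em0 hfsc bandwidth 10Mb qlimit 10 queue { qDefault, qOther }
queue qDefault bandwidth 90% hfsc( upperlimit 9Mb, default )
queue qOther   bandwidth 10% hfsc( linkshare 10% )
```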
Delaying/dropping on ingress is how you signal TCP to back off. If you don't do this, your ISP will, and it will probably have massive bufferbloat. While shaping works best on egress, it does work quite well on ingress, just not nearly as "perfectly" as it does on egress.
Traffic-policing causes that pesky saw-tooth pattern with TCP, and it does not cope well with the gigabit bursts that real networks produce. Until packet-pacing is implemented industry-wide, what's really happening with a 5Mb/s Netflix stream, if you look at Wireshark, is that it sends 1Gb microbursts several times per second. Depending on the policer, it may start dropping packets. This is why networks have buffers in the first place.
The really crazy part is when you look at GPON, WiFi, or any other protocol that encapsulates multiple Ethernet frames into a super-frame. Even on a 10Mb/s connection, you may see 64KiB bursts at 1Gb/s with a 10Mb average.
In the end, you either have a natural limit or an artificial limit. You have zero control over the natural limit, but you can control the characteristics of the artificial limit.
-
Indeed Harvy66, the rate of flow in TCP is not controlled by the sender alone; ingress shaping works exactly as you said. It drops or delays packets to simulate congestion, which causes the sender to back off and slow things down. TCP is a self-regulating protocol; it works very well for single streams, but gets wrecked when applications abuse it, e.g. torrents and Steam.
The congestion window is the prime driver of the flow of data. However, the congestion window is capped by the sender's send window and the recipient's receive window, so all the ingress shaper has to do is either deliberately drop/delay packets to make the congestion window shrink itself, or manipulate the packets to pretend the recipient has a smaller receive window (the latter is how some ISPs apply traffic shaping).
-
You do not need to delay a packet to delay the returning ACK, right? A delayed ACK would trigger congestion control. ECN could also simulate congestion without actually buffering any packet.
I dunno if any ingress shaping setups use this no-buffer idea though. On download, the goal is to slow the sender below our maximum, but sadly we must simulate a slower link, which in turn simulates congestion as the buffer grows. It's not completely necessary to buffer, though.
The only algorithm I know of that treats ingress & egress shaping differently is Cake, IIRC.
In some ways it's kinda neat that traffic-shaping has so many obvious improvements to come.
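For what it's worth, ALTQ does have an `ecn` queue option (which implies RED), which marks ECN-capable flows instead of dropping them — congestion is signalled without discarding the packet. A hedged sketch, with interface and queue names assumed, and I'm not certain every ALTQ build supports `ecn` under hfsc:

```pf
# ECN-capable flows get marked rather than dropped, signalling
# congestion back to the sender without losing the packet.
altq on em0 hfsc bandwidth 10Mb queue { qNormal }
queue qNormal bandwidth 100% hfsc( ecn, default )
```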
-
I think part of your config's problem is that your 10Mbit limit limits everything, leaving no room for other traffic. Always leave some bandwidth for new, high-priority traffic. For example, you could set your Default queue to 9Mbit upper-limit, leaving 1Mbit headroom for any other, non-defaulted traffic.
You might also try enabling codel on the WAN or simply set the queue length to 1 (or 0?). IMO, buffering on ingress, primarily when going from a slow link to a fast link, is completely useless. Why artificially delay a packet that can be transmitted? I dunno if pfSense is capable of bufferless traffic-shaping ("traffic-policing") though.
…
Traffic-policing causes that pesky saw-tooth pattern with TCP, and it does not cope well with the gigabit bursts that real networks produce. Until packet-pacing is implemented industry-wide, what's really happening with a 5Mb/s Netflix stream, if you look at Wireshark, is that it sends 1Gb microbursts several times per second. Depending on the policer, it may start dropping packets. This is why networks have buffers in the first place.
...The saw-tooth problem is on egress; on ingress it may happen regardless of what's happening on egress.
On my connection, whenever I use any ingress shaping or policing I get saw-tooth. When I allow my ISP to do the rate-limiting I get a smooth bitrate. My latency increases from maybe 10ms to 25ms during full saturation without shaping. With shaping, the saw-toothing dropped my average bitrate below the already synthetically lowered bitrate (15%?) so I decided against ingress shaping. I only have 12Mbit so …
Edit: Fixed quote tags.
-
One thing I noticed Nullity: to make the sender back off quickly enough, the packet is best dropped rather than delayed, which means low queue depths are important on low-priority ingress queues to ensure packets get dropped quickly in those queues.
In an ideal world the congestion window would stop growing the moment throughput saturates the line, but it tends to keep growing for a while afterwards, and it can get quite big before packets are dropped naturally.
This is how I believe HFSC differs from the others: when I run a dslreports speedtest on HFSC (with a low-priority queue), it moans about retransmissions, meaning I was dropping packets. Even with those dropped packets, the test still saturates the ALTQ pipe easily, so there is no noticeable performance loss. On the others, like FAIRQ, I get no warning about dropped packets during the test, yet my latency is more jittery and I can get packet loss on ssh and other higher-priority traffic during the test. This suggests to me that on ingress, delaying is not as effective as dropping packets to simulate congestion.
Also I get no saw-tooth effect; speed ramps up quickly and stays there, assuming I am downloading from a good source.
-
One thing I noticed Nullity: to make the sender back off quickly enough, the packet is best dropped rather than delayed, which means low queue depths are important on low-priority ingress queues to ensure packets get dropped quickly in those queues.
In an ideal world the congestion window would stop growing the moment throughput saturates the line, but it tends to keep growing for a while afterwards, and it can get quite big before packets are dropped naturally.
This is how I believe HFSC differs from the others: when I run a dslreports speedtest on HFSC (with a low-priority queue), it moans about retransmissions, meaning I was dropping packets. Even with those dropped packets, the test still saturates the ALTQ pipe easily, so there is no noticeable performance loss. On the others, like FAIRQ, I get no warning about dropped packets during the test, yet my latency is more jittery and I can get packet loss on ssh and other higher-priority traffic during the test. This suggests to me that on ingress, delaying is not as effective as dropping packets to simulate congestion.
Also I get no saw-tooth effect; speed ramps up quickly and stays there, assuming I am downloading from a good source.
A "dropped" packet is just a packet that the sender received no ACK or duplicate ACKs for, right? There's nothing that forces us to actually drop that packet when our congestion is purely artificial. Or maybe there is… I'm a bit rusty in this area nowadays.
I think CoDel does some sort of intelligent rate-limiting calculations to keep bandwidth more consistent from whoever is transmitting when it experiences bufferbloat. I don't think HFSC cares about anything but transmitting packets "fairly" with regard to worst-case latency.
In general, low queue depth seems to be best. I struggle to think of scenarios where it isn't.
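In pf.conf ALTQ terms, that preference for shallow queues might look something like this (a sketch — the interface, queue names, and qlimit figures are assumptions, not anyone's tested config):

```pf
# A shallow qlimit on the bulk queue makes drops happen early, so
# senders back off quickly; the high-priority queue keeps a
# slightly deeper queue to absorb short bursts.
altq on em1 hfsc bandwidth 9Mb queue { qHigh, qLow }
queue qHigh bandwidth 70% qlimit 25 hfsc( linkshare 70% )
queue qLow  bandwidth 30% qlimit 5  hfsc( linkshare 30%, default )
```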
-
A packet dropped inbound means the receiving computer never reports that chunk of data as arrived, because pfSense prevented the packet from being passed on. The ACK never gets sent back to the sender, so the sender backs off, assuming congestion.