HFSC & Codel
-
I ran CoDel and the wizard with HFSC, and on DSLReports I'm now getting an A+ for bufferbloat. I can only get results this good with pfSense; no other router has ever done as well for me.
-
Apologies for bumping an older thread, but I have a question directly related to Harvy66's reply.
I think I've more or less wrapped my head around how to have QoS working correctly, but I don't understand why Harvy has CoDel turned on for only some of his queues… namely ACK and LowPri. Any guidance that someone could share would be appreciated.
Thanks!
~Spritz
-
An over-simplification would be that small queues (CoDel) drop packets in an effort to keep latency low, while a large queue can absorb random bursts without dropping any packets, but latency will increase and fluctuate. Certain traffic like streaming or bulk downloads would probably prefer large buffers, while VoIP or DNS would prefer small buffers.
You might try searching Google for "cisco buffer OR queue depth OR length OR limit". Cisco's documentation is sexy. You might also look up some generic network queueing/buffering Wikipedia articles to see which situations call for buffers.
Really, CoDel should be safe to enable on any traffic type (except UDP?) but maybe there is certain traffic that you dislike and want to force an oversized buffer to discourage it rather than block it?
VOIP, for example, is usually very precise with the bandwidth it needs so you can precisely allocate that amount of bandwidth. In that case, VOIP probably would not benefit from CoDel.
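To put a number on the small-vs-large buffer trade-off, the worst-case queueing delay of a FIFO is just its full backlog divided by the link rate. A minimal sketch (the 1000-packet queue limit and 22.5 Mb/s rate are purely illustrative figures, similar to the configs discussed in this thread):

```python
# Worst-case drain time of a full FIFO queue: backlog / link rate.
# Illustrative numbers only, not a recommendation.

def worst_case_delay_ms(qlimit_packets, avg_packet_bytes, rate_mbps):
    """Milliseconds to drain a completely full queue at line rate."""
    backlog_bits = qlimit_packets * avg_packet_bytes * 8
    return backlog_bits / (rate_mbps * 1e6) * 1000

# A full 1000-packet queue of 1500-byte packets at 22.5 Mb/s:
print(f"{worst_case_delay_ms(1000, 1500, 22.5):.0f} ms")  # prints "533 ms"
```

Half a second of added latency is exactly the kind of bufferbloat that CoDel's early dropping avoids, and why a large queue only suits traffic that tolerates delay.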
-
Most protocols that use UDP are more sensitive to latency than to loss, though there are some TCP-like usages of UDP that respond to loss. There is very little reason not to use CoDel. I pretty much only skip CoDel in situations where there is always going to be enough bandwidth in the queue and I don't want the "overhead" of CoDel. My ACK queue has 20% bandwidth, which is complete overkill on my symmetrical connection. ACKs tend to consume only about 1/30th of your ingress, which can be an issue for asymmetric connections.
One thing of note: even ACKs handle loss better than latency. If you have an asymmetric connection where you don't want to give ACKs too much bandwidth, you can use CoDel or a smaller queue. This mostly applies to bulk transfers; video games that use TCP may not like ACK loss.
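The 1/30th rule of thumb above makes the asymmetric-connection problem easy to see with a quick back-of-envelope calculation. The 50/5 Mb/s figures here are just a hypothetical example of a typical asymmetric line:

```python
# Approximate upstream bandwidth consumed by ACKs for a given download,
# using the ~1/30 ingress ratio mentioned above. 50/5 Mb/s is a
# hypothetical asymmetric connection, not a measured one.

def ack_upload_mbps(download_mbps, ratio=1 / 30):
    """Rough upstream ACK load generated by a saturated download."""
    return download_mbps * ratio

down, up = 50.0, 5.0
acks = ack_upload_mbps(down)
print(f"ACKs: {acks:.2f} Mb/s ({100 * acks / up:.0f}% of a {up:.0f} Mb/s upload)")
# prints "ACKs: 1.67 Mb/s (33% of a 5 Mb/s upload)"
```

On a symmetric link that same 1.67 Mb/s is negligible, which is why the ACK queue sizing only really matters when upload is the scarce resource.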
-
Thank you for the replies and explanation.
So if I'm understanding you correctly, I believe I've set everything up correctly for my situation. I've used the wizard as a starting point, allowing it to set the service curves. I've then tweaked the following settings –>
- Set upload and download to 95% of measured max
- disabled Explicit Congestion Notification for all queues
- enabled CoDel for all queues, with the exception of VoIP and Ack (both inbound & outbound)
- set the queue limit for all queues to 1024
Does this make sense?
Thanks again!
~Spritz
-
For those interested, in the end I ended up with a fairly simple configuration.
WAN: Scheduler Type HFSC, Bandwidth 22.5Mb
- qAck: Priority 6, Queue Limit 1000, Bandwidth 20%
- qInternet: Bandwidth 80%
  - qDefault: Priority 2, Queue Limit 1000, Default Queue, ECN, Codel, Bandwidth 75%
  - qHigh: Priority 4, Queue Limit 1000, ECN, Codel, Bandwidth 10%, Link Share m2 10%
  - qLow: Priority 1, Queue Limit 1000, ECN, Codel, Bandwidth 5%, Link Share m2 5%

LAN: Scheduler Type HFSC, Bandwidth 115Mb
- qAck: Priority 6, Queue Limit 1000, Bandwidth 20%
- qInternet: Bandwidth 80%
  - qDefault: Priority 2, Queue Limit 1000, Default Queue, ECN, Codel, Bandwidth 75%
  - qHigh: Priority 4, Queue Limit 1000, ECN, Codel, Bandwidth 10%, Link Share m2 10%
  - qLow: Priority 1, Queue Limit 1000, ECN, Codel, Bandwidth 5%, Link Share m2 5%
The bandwidth values for WAN and LAN are between 90% and 95% of peak available. I also have DMZ and GUEST, which are configured identically to LAN except that they have smaller bandwidth specifications. I decided to live with a bit of conflict between LAN, DMZ and GUEST rather than trying to stand on my head and spin like a top. :)
Thank you all for your help. Additional comments or suggestions are welcomed.
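A quick way to sanity-check percentages in a config like this is to confirm that children never oversubscribe their parent, and to convert the shares into absolute rates. A small sketch, assuming (as the layout suggests) that qDefault, qHigh and qLow sit under qInternet:

```python
# Sanity-check the WAN HFSC shares: siblings should not exceed 100%
# of their parent. Percentages copied from the config in this post;
# the parent/child layout is an assumption based on how it reads.

wan_mbps = 22.5
top_level = {"qAck": 20, "qInternet": 80}
q_internet_children = {"qDefault": 75, "qHigh": 10, "qLow": 5}

assert sum(top_level.values()) == 100          # fully allocated
assert sum(q_internet_children.values()) <= 100  # 90%, leaves headroom

q_ack_mbps = wan_mbps * top_level["qAck"] / 100
q_default_mbps = (wan_mbps * top_level["qInternet"] / 100
                  * q_internet_children["qDefault"] / 100)
print(f"qAck: {q_ack_mbps:.1f} Mb/s, qDefault: {q_default_mbps:.1f} Mb/s")
# prints "qAck: 4.5 Mb/s, qDefault: 13.5 Mb/s"
```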
-
In theory, you can move the bandwidth limiter to the qInternet level, and have a qLink available, so inter-vlan-guest-dmz communications can be done at full speed
-
moscato359 makes a good point. In the case of multi-LAN, separating intra-LAN traffic from Intranet can be useful. If you go this route, I would recommend placing your ACK queue under qInternet.
More of a philosophical reason, but my default queue is qLow. I have a lot of normal traffic that is not low, but I don't care enough to add a rule. Because of this, I set my qLow pretty high, like 20% bandwidth. The reason for this is most traffic that is a "bandwidth hog" is also incredibly difficult if not impossible to classify.
-
In theory, you can move the bandwidth limiter to the qInternet level, and have a qLink available, so inter-vlan-guest-dmz communications can be done at full speed
Yes, the wizard does this. To make it work properly requires more firewall rules. In my case, it wasn't necessary because there is very little traffic between LAN and DMZ, and none at all between GUEST and LAN or DMZ.
-
Truth be told, there are actually two other interfaces that I didn't bother to mention, nor did I bother with shaping on them. Combined they average about 1Kb. :)
-
I have a question that you may be able to answer quickly, but if not I'd be glad to open a new post. I have my traffic shaping rules set up very much like yours, but I have asymmetric upload and download speeds, which caused me to question whether I'm matching on the correct interfaces and/or directions.
As an example, I have a floating rule set to match inbound UDP traffic on the LAN interface whose destination is port 53 and assign it to a higher priority queue to prioritize DNS traffic. However, I think I recall that traffic gets assigned to the queue on which the match was made, so in this case I believe that I am erroneously assigning outbound DNS queries to a queue on my LAN interface (i.e. the download queue). I'm wondering if I should instead change this rule to match outbound on the WAN interface? If this isn't clear I'd be glad to provide more details.
Or, if it's not too much trouble, could someone describe how to set up the match (on which interface(s) and direction(s)) in order to successfully assign download traffic to a queue on the LAN interface and upload traffic to a queue on the WAN interface? I think that's all I really need. Thanks in advance for any assistance.
-
I don't believe you need to have interface specific rules, just floating rules. See attached.
-
Thanks for the quick response. I believe you're correct, and that this has to do with the concept of "flows", with which I am only loosely familiar. I believe the idea is that if, for example, outbound traffic on the LAN interface to TCP port 22 is matched to identify SSH connections and assign them to a specific queue, then any traffic subsequently associated with that flow will be assigned to the queue of the same name (if one exists) on the interface through which the traffic transits in an outbound direction. That last bit about the outbound direction is the critical assumption I'd like to confirm, though. But if the queues on the WAN interface govern upload throughput and the queues on the LAN interface govern download throughput, then it logically follows that they only apply to traffic that is outbound from their respective interfaces.
-
Mapping or classifying a "flow" is a function of the firewall rules. For simplicity, think of it as "classifying a flow as qHigh," rather than "mapping a flow to a queue qHigh on an interface."
-
Thanks, that does make sense to me I believe. I suppose my lingering confusion then has to do with exactly when and how packets within a flow classified as qHigh, for example, are actually placed into the qHigh queue on a specific interface. Suppose a simple single-LAN single-WAN setup with a queue named qHigh on both the WAN and LAN interfaces. Is my understanding correct that within a flow classified as qHigh, that packets headed toward the local network (download, out direction on LAN interface) would be placed in qHigh on the LAN interface and packets headed toward the Internet (upload, out direction on WAN interface) would be placed in qHigh on the WAN interface?
To put it another way, I have a 50/5 Internet connection. So is it a correct statement that if I have queues on my LAN interface that are cumulatively constrained to 50Mbps and queues on my WAN interface that are cumulatively constrained to 5Mbps, there is no way that I could "accidentally" queue upload traffic in a LAN queue or download traffic in a WAN queue? I suspect I would notice if I were doing this, but my concern is clearly that I don't want to inadvertently bottleneck any download traffic to 5Mbps.
Thanks for bearing with me and I apologize if the wording is awkward.
-
AFAIK, yeah, if packets of a certain state leave on qBlah (WAN) they will return on qBlah (LAN). The queues only apply to traffic leaving an interface so upload traffic cannot be constrained by a LAN queue, since it is only receiving traffic.
I think limiters can be bidirectional on a single interface, both limiting what leaves & enters.
Yeah… you've made me a little unsure about how it all works... Maybe I just haven't had enough coffee. :)
-
That's usually the state I find myself in when I really settle in to try to think through this stuff :) I convince myself that I finally understand it, and then realize some nuance like this that really shatters my confidence. However, everything you said confirms the way that I believe it works as well (including the bi-directional nature of limiters). So I'm going to run with that unless and until proven wrong. Thanks again to everyone who's weighed in; I'm consistently impressed with the quality of discussion here.
-
I think the bandwidth limit is solely determined by the interface the packet will leave from. So in my case, packets from the local network destined to the internet are controlled by the bandwidth limit of the scheduler on WAN (22.5Mb), and packets from the internet destined to the local network are governed by the bandwidth limit of the scheduler on LAN (115Mb).
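That egress-only rule can be captured in a toy model: on a simple two-interface router, which scheduler shapes a packet is purely a function of the interface the packet leaves from. A sketch using the WAN/LAN rates quoted above (the function name is my own, purely illustrative):

```python
# Toy model of egress-only shaping: a queue on an interface only shapes
# packets *leaving* that interface, so downloads are governed by the
# LAN-side scheduler and uploads by the WAN-side one. Rates are the
# 22.5/115 Mb/s figures from the config in this thread.

RATES_MBPS = {"WAN": 22.5, "LAN": 115.0}

def egress_interface(direction):
    """Interface whose queues shape a flow, per the rule described above."""
    return {"upload": "WAN", "download": "LAN"}[direction]

for direction in ("upload", "download"):
    iface = egress_interface(direction)
    print(f"{direction}: shaped by {iface} queues at {RATES_MBPS[iface]} Mb/s")
# prints "upload: shaped by WAN queues at 22.5 Mb/s"
# then   "download: shaped by LAN queues at 115.0 Mb/s"
```

This also answers the earlier 50/5 worry: download traffic can't be bottlenecked by the WAN-side limit, because downloads never leave through the WAN interface.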