Can someone explain FAIRQ vs Codel?
-
I have been using codel for some time … It works great on my 30/5 cable connection. I would like to use it on a slower 6/0.5 connection, but have been reading it doesn't work well at low speeds.
Since pfsense does not support fq_codel (which apparently solves the slower speed problem), is FAIRQ an alternative to Codel? Can anyone explain how it works?
-
edit: See other post, I have no idea
Fairq is not an option in PFSense, but the idea is to break network flows up into hash buckets so that latency domains are separated among those buckets. There is little information to be found about a standalone implementation of fairq, but several AQMs include fairq-style flow separation in addition to their core logic.
Codel is an AQM that aims for a target latency. If a packet spends too much time in the queue, codel will increase the drop rate until the latency comes back down. The reason codel does not play well with low-bandwidth connections is that a large packet can take longer to transmit than the default target latency, which means codel is constantly trying to drop packets.
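To put rough numbers on that, here is a minimal Python sketch of the idea (not the full CoDel algorithm), using the default 5 ms target and the link speeds from the original post. The function name and constants are just for illustration.

```python
# Minimal sketch of the CoDel idea: each dequeued packet carries the time
# it spent queued (its "sojourn time"); if that stays above TARGET for too
# long, CoDel starts dropping at an increasing rate.

TARGET = 0.005     # 5 ms default target sojourn time

def serialization_delay(packet_bytes, link_bps):
    """Time just to put one packet on the wire."""
    return packet_bytes * 8 / link_bps

# Why the defaults hurt at 0.5 Mbit/s: a single full-size packet takes
# longer to serialize than the entire 5 ms target, so the measured delay
# can never get under TARGET and CoDel keeps trying to drop.
print(serialization_delay(1500, 30_000_000))           # ~0.0004 s on a 30 Mbit/s link
print(serialization_delay(1500, 500_000))              # 0.024 s on a 0.5 Mbit/s uplink
print(serialization_delay(1500, 500_000) > TARGET)     # True: over target before any queueing
```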
-
I see FAIRQ as an option? See attached?

-
Ohhh, I was looking under queue types, not scheduler type. Yep, I have no idea then.
-
FAIRQ is an implementation of SFQ (Stochastic Fair Queueing). Fair Queueing has no technical definition, but the different implementations simply try to keep the worst-case delay/bandwidth more proportional as traffic congestion increases.
FAIRQ gives each connection a hash which, given the way the hash is calculated, can occasionally be the same as another connection's, but this is a rare occurrence. This occasional hash collision matters because it is what makes FAIRQ/SFQ not perfectly fair.
Now, with all the hashes, FAIRQ will iterate through them in a round-robin fashion. SFQ is interesting because it achieves good fairness while also being less processor-intensive than other Fair Queueing algorithms. FAIRQ also has a throughput "hog" parameter which modifies it in ways I am unaware of. Perhaps a controllable "weight", like in Weighted Fair Queueing?
If you want more details you can find the white-paper on Stochastic Fair Queueing (SFQ) and also browse the source code for FAIRQ, which is originally from DragonflyBSD.
The "fq" part of fq_codel is practically SFQ, but with a slight modification for "flow fairness".
-
Thanks for this.
-
http://en.wikipedia.org/wiki/Fair_queuing
That link undoubtedly explains Fair Queueing better than I could. Since "fairness" is kinda undefined, I have trouble explaining it.
SFQ is a novel algorithm because it is (or can be, depending on available resources) practically as fair as other methods, but requires less processing power.
With some preparation (setting the queue size properly), I believe FAIRQ is a better choice than standard CoDel if you are dealing with multiple parallel streams of differing traffic types.
Of course, all these configurations need to be proven or disproven before we make any decisions. Granted… testing ain't easy.
-
Think of "fair" as being all flows having equal priority. No one flow can consume more than another. For practicality reasons, scheduling the queue per firewall state is not fast or simple. Many implementations use hash-buckets as a trade off. Hashing the flow(dest ip+port, src ip+port), allows a random distribution of flows across many separate "buckets". These buckets take the place of states. Now you have N number of buckets and flows are randomly assigned to buckets based on their flow info, but also going into the same bucket. Hashing also allows the algorithm to be stateless relative to the flows.
Any two flows have a chance of being assigned to the same bucket, but that chance can be relatively low if you have a lot of buckets. Given a large number of flows, you're going to have some overlap, with several flows sharing the same bucket. The good news is that most flows are relatively low bandwidth, so your chance of getting stuck in a bucket with a high-bandwidth flow is relatively low.
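Just to quantify "relatively low": assuming a uniform hash, the chance that a new flow lands in an already-occupied bucket is easy to estimate. The 256-bucket figure below is an assumption for the example, not what FAIRQ actually uses.

```python
# Chance that a new flow collides with at least one of the flows already
# active, assuming a uniform hash over num_buckets buckets.
def collision_chance(active_flows, num_buckets=256):
    return 1 - (1 - 1 / num_buckets) ** active_flows

print(round(collision_chance(20), 3))    # ~0.075 with 20 active flows
print(round(collision_chance(200), 3))   # ~0.54 with 200 active flows
```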
Many implementations have enhanced the notion of "fair" by giving buckets that are empty higher priority than buckets that are not. This means a low-bandwidth flow that quickly gets forwarded can have higher priority than flows consuming large amounts of bandwidth, but only as long as that low-bandwidth flow stays low bandwidth.
I'm not saying that this is how fairq does it, but many implementations of "fair queues" use these concepts in some variation.
You can think of a "non fair" queue as a queue with one bucket. All flows get stuck in the same bucket where they can all directly affect each other.
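Here is a small Python sketch of that "empty buckets get priority" enhancement, roughly the way fq_codel's new/old flow lists work. As said above, no claim that FAIRQ itself does this; the names and structure are my own.

```python
import collections

queues = collections.defaultdict(collections.deque)
new_flows = collections.deque()   # buckets that just went from empty to busy
old_flows = collections.deque()   # buckets with a standing backlog
listed = set()                    # buckets currently on either list

def enqueue(bucket_id, pkt):
    queues[bucket_id].append(pkt)
    if bucket_id not in listed:
        listed.add(bucket_id)
        new_flows.append(bucket_id)      # sparse flows get a head start

def dequeue():
    # Serve newly active (previously empty) buckets before long-backlogged
    # ones, so a quick, low-bandwidth flow jumps ahead of bulk transfers.
    for lst in (new_flows, old_flows):
        while lst:
            bucket_id = lst.popleft()
            q = queues[bucket_id]
            if q:
                pkt = q.popleft()
                old_flows.append(bucket_id)  # back of the line afterwards
                return pkt
            listed.discard(bucket_id)        # bucket drained: off the lists
    return None
```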
-
A major point of divergence among Fair Queueing implementations is per-byte fairness vs per-packet fairness. According to DragonflyBSD's FAIRQ source code, FAIRQ is per-byte fair… I think. Per-packet fairness can be deceptively unfair, depending on expectations.
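A quick way to see why per-packet "fairness" can deceive, assuming one flow sends full-size packets and another sends small ones, each getting one packet per round-robin turn:

```python
# One packet per turn looks "fair", but bytes tell a different story:
big_flow_bytes   = 1500 * 100   # 100 turns of full-size packets
small_flow_bytes = 64 * 100     # 100 turns of small packets
print(big_flow_bytes / small_flow_bytes)  # ~23.4x the throughput

# A per-byte scheduler (deficit round robin, for example) instead gives
# each bucket a byte quantum per turn, so both backlogged flows converge
# to roughly equal throughput.
```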