FAIRQ Scheduler Number of Buckets
-
Not sure, but you could try 1024 and see how it works. FYI, this is the birthday problem: the chance that two or more flows get assigned to the same bucket climbs toward 50% once the number of flows approaches the square root of the number of buckets. e.g. with 1024 buckets and 32 flows, there's roughly a 40% chance (passing 50% by about 38 flows) that two or more of those flows land in the same bucket.
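For anyone who wants to sanity-check that rule of thumb, it's the standard birthday-problem calculation. A minimal Python sketch (the function name is just for illustration):

```python
# Probability that at least two of `flows` flows hash into the same one of
# `buckets` buckets, assuming a uniformly distributed hash (birthday problem).
def collision_probability(flows: int, buckets: int) -> float:
    p_no_collision = 1.0
    for i in range(flows):                  # P(no collision) = prod (1 - i/n)
        p_no_collision *= 1.0 - i / buckets
    return 1.0 - p_no_collision

# 32 flows into 1024 buckets: about a 39% chance of at least one collision,
# crossing the 50% mark at around 38 flows.
print(round(collision_probability(32, 1024), 2))   # 0.39
```

So sqrt(buckets) flows is roughly the point where collisions stop being rare, which is the intuition behind the 50% figure.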
Cake should fix this whenever it reaches RTM. In short, in a non-bloated buffer there is a small, strong practical statistical limit on the number of colliding hashes. Cake can get ~0% collisions with only 256 buckets, no matter the number of flows.
-
I think the default is applied by pfSense, not some FAIRQ hardwired default, so you could set up a FAIRQ queue with the default and run a "pfctl" command to see what the default bucket count actually is.
Though, aside from general curiosity, I wouldn't worry too much about the default bucket number unless you know it's what's causing problems. Or… if you have memory to spare, why not have more buckets? ;)
The best documentation I found for FAIRQ (which is sparse, perhaps purposely so) was DragonflyBSD's source code for FAIRQ; DragonflyBSD created the FAIRQ we inherited.
-
Hi guys - thanks a bunch for the responses.
My pfSense box actually has 16GB of RAM, so I tried both 1024 and 2048 for the bucket number on each of my interfaces. It's currently still set to 2048 and everything is working fine. That being said, I haven't really tried opening a few thousand connections to see how things would perform. Is there an easy way to benchmark that?
Harvy66 - could you please explain to me again how you came up with the 50% number? Maybe I missed something, but the square root of 1024 is 32; how did you get from there to 50%?
Nullity - I did try using the pfctl command yesterday as well, specifically pfctl -v -s queue, but unfortunately that did not tell me anything about the default number of buckets. Did I perhaps issue the wrong command?
Also, one other thought I had (based on reading the earlier thread on fq_codel vs. FAIRQ + Codel): if in this implementation the active hash buckets are placed together on a single Codel active queue, doesn't that prevent FAIRQ from servicing them round-robin (since they'd be sitting in a FIFO or LIFO structure)? In that case, wouldn't we expect a Codel active queue for each of the hash buckets, which FAIRQ could then dequeue in round-robin fashion? Maybe I'm misunderstanding, but I'd be curious what you guys think.
Thanks again for all your help, I really appreciate it.
-
So, after looking for a while at the source which Nullity referenced here:
https://forum.pfsense.org/index.php?topic=105563.msg590249#msg590249
I have to say that I'm not 100% sure, but it seems to me that the queues are done per bucket, judging by lines 74 and 611 onward in altq_fairq.c. The comments also keep referring to a circular list of buckets rather than a queue of buckets. This makes me wonder now: how is fq_codel different, then, from FAIRQ + Codel?
The one thing I think I was able to figure out: it looks like the default number of buckets is 256 and the max is 2048. Would you guys agree?
Thanks again.
-
Harvy66 - could you please explain to me again how you came up with 50% number? Maybe I missed something but the square root of 1024 is 32, and then how did you get to 50%?
This is just the math behind it (the birthday problem): https://en.wikipedia.org/wiki/Birthday_attack
-
FAIRQ + Codel is going to have two sets of buffers; fq_codel has one set. fq_codel also does DRR (deficit round-robin) scheduling to decide which queue to dequeue from next.
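For reference, the deficit round-robin scheduling fq_codel does can be sketched like this. This is a toy model, not the fq_codel source; the class, names, and quantum value are made up for the example:

```python
# Toy deficit round-robin (DRR) scheduler: each backlogged flow earns a byte
# credit ("deficit") per round and may send packets up to that credit, which
# keeps byte-level fairness across flows with different packet sizes.
from collections import deque

QUANTUM = 1514  # byte credit granted per round; roughly one full Ethernet frame

class DrrScheduler:
    def __init__(self):
        self.queues = {}        # flow id -> deque of packet sizes (bytes)
        self.active = deque()   # round-robin ring of flows with queued packets
        self.deficit = {}       # flow id -> accumulated byte credit

    def enqueue(self, flow, size):
        if flow not in self.queues or not self.queues[flow]:
            self.queues.setdefault(flow, deque())
            self.deficit.setdefault(flow, 0)
            self.active.append(flow)        # flow just became backlogged
        self.queues[flow].append(size)

    def dequeue(self):
        while self.active:
            flow = self.active[0]
            q = self.queues[flow]
            if self.deficit[flow] < q[0]:
                # Not enough credit for the head packet: top up the deficit
                # and move this flow to the back of the ring.
                self.deficit[flow] += QUANTUM
                self.active.rotate(-1)
                continue
            pkt = q.popleft()
            self.deficit[flow] -= pkt
            if not q:                       # flow drained: leave the ring
                self.deficit[flow] = 0
                self.active.popleft()
            return flow, pkt
        return None                         # nothing queued anywhere
```

With two backlogged flows of equal-size packets, dequeues alternate between them instead of draining one flow first.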
-
Thanks Harvy66 - can you explain to me a bit more why FAIRQ + Codel would have two buffers while fq_codel has just one? Is it because one can still set up multiple child queues under FAIRQ inside pfSense, which adds another buffer besides the one that keeps track of the hash buckets (where each bucket's flow packet queue is managed by Codel)? If that is the case, would one expect performance to be similar if only one child queue is created under FAIRQ? Or is there another reason altogether for the two different sets of buffers?
Thanks in advance for the clarification.
-
AFAIK, (our) CoDel simply controls (FAIRQ's) global buffer size; it doesn't add another buffer. It's just like manually setting a queue's maximum depth, except CoDel controls it dynamically and intelligently.
CoDel could also be applied to each of FAIRQ's per-flow pseudo-queues… but I dunno if it does. I think it's unlikely.
It shouldn't be too hard for some programmer to find proof of these hypotheses in the source-code and present it to us non-programmers.
Who knows? ermal would know, but he's no longer around. :(
-
Thanks Nullity. If it's just one giant queue/buffer of buckets controlled by CoDel, wouldn't the fairness of FAIRQ break down? In other words, how would the FAIRQ algorithm be able to go round-robin and dequeue one packet at a time from each flow? It seems that if CoDel controlled the size of the queue/buffer that contains the buckets (vs. controlling the per-flow queues), it would no longer be fair, as some buckets at the end might be dropped (i.e. the algorithm would never get to them). I could be completely wrong, though. I looked a little through the source code (findings further up in this thread), and it seemed like the queue-management algorithm chosen (RED, CoDel, etc.) is applied per bucket.
Thanks again for all your help and explanation guys, I really appreciate it.
-
FAIRQ controls the ordering of queued packets, nothing more.
CoDel controls queue depth, nothing more.
That's how I understand it…
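If that understanding is right, the division of labour can be illustrated with a toy CoDel-style drop check, which only asks "has queueing delay been above target for too long?" and never chooses which packet leaves next (that's the scheduler's job). This is a simplification of CoDel's control law, not the ALTQ implementation; 5 ms / 100 ms are CoDel's usual target/interval defaults, but the real algorithm is more involved:

```python
# Toy CoDel-style AQM: decides whether to drop based purely on how long
# packets have been sitting in the queue (sojourn time), independent of
# whatever ordering the scheduler (e.g. FAIRQ) imposes.
TARGET = 0.005     # seconds of acceptable standing queue delay
INTERVAL = 0.100   # seconds the delay must stay above TARGET before dropping

class CodelState:
    def __init__(self):
        self.drop_at = None   # time at which dropping becomes allowed

    def should_drop(self, sojourn_time, now):
        """sojourn_time: how long the just-dequeued packet waited in queue."""
        if sojourn_time < TARGET:
            self.drop_at = None             # delay recovered; disarm
            return False
        if self.drop_at is None:
            self.drop_at = now + INTERVAL   # start the grace period
            return False
        return now >= self.drop_at          # persistently late: drop
```

Nothing in that state machine knows about buckets or flows, which fits the "FAIRQ orders, CoDel limits depth" picture above.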