Pseudo fair queuing with HFSC
-
I just had a weekend project idea. I'm going to use floating rules that match against the destination network, using 255.255.255.(248-255)/29 to break destination IP addresses up into 8 different child queues with even ratios.
Since I'll be distributing among the queues based on a netmask, it won't be random, but my hope is there are not many biases in the IPs that I connect to.
HFSC is limited to 16 queues from what I've read, so 8 is the max for a single queue since I want to break things up by bits. I only plan on doing this for my "normal" traffic, which has few rules, since this will require duplication of rules.
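One plausible reading of the /29 trick, sketched in Python: with a /29 mask the low 3 bits of the destination address are host bits, so they can pick one of 8 child queues. This is only an illustration of the bucketing, not pf syntax; the `bucket()` helper is hypothetical.

```python
import ipaddress

def bucket(dst_ip: str) -> int:
    """Map a destination IP to a child-queue index 0-7 via its low 3 bits."""
    return int(ipaddress.ip_address(dst_ip)) & 0x7

# Consecutive addresses spread evenly across all 8 buckets:
# 10.0.0.0 -> 0, 10.0.0.1 -> 1, ... 10.0.0.7 -> 7
```

Whether real destination IPs land evenly in these buckets is exactly the bias question raised above.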
-
What are your expected results and how do they differ from a single default queue?
HFSC is already "fair", a term which each Fair Queueing algorithm specifically defines. HFSC focuses on delay.
-
With a single queue, all traffic shares the same queue. With multiple queues, you can split up the traffic among the queues, reducing the number of flows per queue.
That's pretty much all fair queuing does in the first place: create a bunch of queues and use a hashing algorithm to distribute the flows among them. In this case, the "hash" would be the netmask, which is not random, so distribution won't be as good and will probably have some biases and more pathological cases.
-
Eight queues with the same priority are equal to a single queue.
HFSC does not randomly distribute bandwidth; it is based on an earliest-deadline algorithm (SCED). Random distribution would be unpredictable, which is counter to HFSC's primary function.
-
I never said anything about HFSC randomly distributing anything; I just said the ideal would be flows randomly distributed to queues.
Eight queues with the same priority would not be the same as one queue. HFSC would need to interleave the queues, which is the whole point. Fair queuing is exactly that: you take the traffic, distribute it among many queues, then make sure each queue gets an equal amount of bandwidth. By having multiple queues, you increase the surface area for dequeuing. Instead of one head of queue, you have N heads of queue.
-
HFSC is already a Fair Queueing algorithm.
Hierarchical Fair Service Curve
-
Yes, fair among queues but not fair within a queue. The point is to distribute traffic over many queues that are all identical. HFSC isolates each queue from the others, limiting the damage from greedy flows. Coupled with CoDel, it causes CoDel to become more aggressive if a flow attempts to monopolize too much bandwidth.
-
Yes, fair among queues but not fair within a queue. The point is to distribute traffic over many queues that are all identical. HFSC isolates each queue from the others, limiting the damage from greedy flows. Coupled with CoDel, it causes CoDel to become more aggressive if a flow attempts to monopolize too much bandwidth.
Ah, CoDel. Yeah, your idea works if the focus is to separate flows for CoDel, since CoDel is applied per queue (I think). You did not mention that initially.
Report back if this actually makes any measurable difference.
Dave Taht mentioned (in the bounty thread) that fq_CoDel was ~300 lines of code, which is a bit encouraging, but the $60k bounty confounds me, lol. The current pfSense CoDel still needs a couple of tweaks (target/interval), which may improve your situation in the short term.

HFSC is fair within queues though, not inter-queue (unless queues are the same priority, as I mentioned earlier). Usually, queues are different priorities, so fairness among them is literally impossible. The burden of proof is on you. Please cite something (from the HFSC paper) to prove your assumptions about HFSC's lack of intra-queue fairness, otherwise…
Edit: I think both of us are thinking of "fair" as something different than how HFSC defines it. This is an area where I had to read not only the HFSC paper, but some cited papers. Unless you fully understand HFSC and its definition of "fair", be careful trying to change/hack it. IIRC, SCED (Service Curve Earliest Deadline) was the fairness algo used, being that HFSC focused on worst-case delay fairness.
-
Please cite something (from the HFSC paper) to prove your assumptions of HFSC's lack of intra-queue fairness
HFSC does not do intra-queue anything, that is the role of the queue.
Usually, queues are different priorities, so fairness among them is literally impossible
Depends on your definition of fair. HFSC provides minimum guarantees, which is a kind of fairness.
In other news, I was able to fully load my upload recently.
Setup
qHigh 40%
qNormal 40%
qDefault 20%

Idle Pings
qHigh ISP ping 1ms
qNormal Internet ping 10ms

Load
qHigh 0
qNormal 64Mb/s-ish
qDefault 32Mb/s-ish

Loaded pings
qHigh ISP ping 0.2ms
qNormal Internet ping 11ms

qHigh was unaffected. qNormal saw a 1ms increase, but it was at full load. That was avg ping; jitter increased for qNormal, which is expected. It still seems like HFSC manages to completely isolate queues, with worst-case latency set by your bandwidth allocations.
-
Please cite something (from the HFSC paper) to prove your assumptions of HFSC's lack of intra-queue fairness
HFSC does not do intra-queue anything, that is the role of the queue.
I thought HFSC was the queue…
Edit: If your assumptions are correct, then in a single-queue setup with HFSC, HFSC is doing nothing, since all flows are intra-queue. Can you please post something (anything) to back up your claims?
-
HFSC is a scheduler.
-
The hierarchical fair-service curve (HFSC) is a network scheduling algorithm for a network scheduler
http://en.wikipedia.org/wiki/Hierarchical_fair-service_curve
The scheduler is what decides which queues to process and in what order.
http://www.openbsd.org/faq/pf/queueing.html
Queuing and scheduling are logically two orthogonal issues.
-
This is with 32/16 streams
-
@Harvy66, can you please post a screenshot(s) of the QoS settings you used for that last screenshot? Thanks
-
edit: I updated interfaces to include the scheduler
Here are the floating rules I use. This is a reduced list that does not include all of the game ports, etc. I just took screenshots and pasted them together for the main rules that I use.
Both LAN and WAN are identical, but I do have a symmetrical connection.
-
The hierarchical fair-service curve (HFSC) is a network scheduling algorithm for a network scheduler
http://en.wikipedia.org/wiki/Hierarchical_fair-service_curve
The scheduler is what decides which queues to process and in what order.
http://www.openbsd.org/faq/pf/queueing.html
Queuing and scheduling are logically two orthogonal issues.
HFSC added the ability to "decouple delay and bandwidth" to an earlier algorithm.
This algorithm was called Hierarchical Packet Fair Queueing (HPFQ).
Here is a quote from the HFSC-dedicated web-page of one of HFSC's authors.
We have developed an idealized model that is the first to simultaneously capture the requirements of the three important services in an integrated services computer network: guaranteed real-time, adaptive best-effort, and hierarchical link-sharing services. We then designed two hierarchical scheduling algorithms, Hierarchical Packet Fair Queueing (H-PFQ) and Hierarchical Fair Service Curve (H-FSC). H-PFQ is the first algorithm to simultaneously support all three of real-time, adaptive best-effort, and link-sharing services. H-FSC allows a more flexible resource management than H-PFQ by decoupling the delay and bandwidth allocation. From a conceptual point of view, H-FSC is the first algorithm that goes beyond Fair Queueing and still satisfies all important requirements in integrated services networks. To find out more about H-PFQ and HFSC's technical advantages, click here.
HFSC is a Fair Queueing algorithm, or to simplify, HFSC is a queueing algorithm. The queueing algorithm schedules the queue. This is fundamental stuff…
The whole premise for this thread is redundant and useless at best, except for the CoDel idea.
-
If HFSC did queuing, there would be no point in indicating the type of queue you want (CoDel, RED, etc.). If HFSC did fair queuing within a queue, there would be no point to fq_CoDel. HFSC does not do queuing in the normal usage of the term "queue". HFSC is not an AQM.
AQMs and schedulers serve two completely different purposes. One handles scheduling the dequeuing of queues and the other handles how buffering occurs within the queue. Some implementations, like cake, combine the two, like how ZFS combines volume management with the file system. Not to say cake gains as much awesomeness as ZFS did by combining abstractions.
Because they are two different areas of research, they have slightly different usages for their terms, even if they use the same words with similar meanings. "Queuing" in scheduling is different than "queuing" in queue management.
"decouple delay and bandwidth" is worthless if you have 500ms of packets in backlog in your queue. HFSC does nothing about managing the bufferbloat issue, you need an AQM, but HFSC does an excellent job making sure your queues get their minimum allocated bandwidth and allows "decoupling" packet delays from available bandwidth.
No longer does your 1Mb/s queue take 12ms to send a 1500-byte packet; you can make it 1.2ms by giving it a 10Mb service curve with an appropriate "d", while still keeping it limited to 1Mb/s. But that does nothing to help bufferbloat. If you have a standing queue of 100 1500-byte packets, even if the head packet takes only 1.2ms to send because of the service curve, the average delay is still going to be about 1200ms, dwarfing any benefit of your service curve.
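The arithmetic above checks out (serialization delay only, ignoring propagation and processing):

```python
# One 1500-byte packet is 12,000 bits.
packet_bits = 1500 * 8

ms_at_1mbps = packet_bits / 1_000    # 1 Mb/s = 1,000 bits per ms -> 12 ms
ms_at_10mbps = packet_bits / 10_000  # 10 Mb/s service curve     -> 1.2 ms
backlog_ms = 100 * ms_at_1mbps       # 100-packet standing queue drained
                                     # at 1 Mb/s -> 1,200 ms of backlog
```

The service curve shrinks the head packet's delay tenfold, but the 100-packet backlog still dominates; only an AQM can attack that.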
Another way to think about it.
Queue managers attempt to reach the coveted ideal 1 packet standing queue, keeping queue time to the absolute minimum while maximizing the queue's throughput. The scheduler manages when the head packet gets sent from a specific queue.
-
RED and Codel (AQMs) dynamically control the size of the queue, not the scheduling. HFSC is the scheduler. fq_codel is the combination of CoDel (AQM) and Fair/Flow Queueing (scheduler).
In Linux, the damn module name is even called sch_hfsc.ko because it is a scheduling algorithm.
Yes, AQMs and schedulers have two different purposes. AQMs control queue size. A scheduler, well… schedules. To be clear, individual flows are FIFO, but the interleaving of flows is what is scheduled by HFSC.
Check the following link for more information about Fair Queueing and the way it attempts to interleave flows fairly. http://www.cs.cmu.edu/~hzhang/HFSC/TALK/sld027.htm Check some of the earlier and later slides for more info.
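A toy sketch of what "interleaving flows" means in fair/flow queueing: hash each flow to a sub-queue, then dequeue round-robin so a greedy flow cannot hold the head of line. The names and structure here are illustrative, not any real implementation (crc32 is used instead of Python's salted `hash()` for determinism).

```python
import zlib
from collections import deque

N = 4  # number of sub-queues
queues = [deque() for _ in range(N)]

def enqueue(flow_id: str, packet: str) -> None:
    """Hash the flow to a sub-queue; packets within a flow stay FIFO."""
    queues[zlib.crc32(flow_id.encode()) % N].append(packet)

def drain() -> list:
    """Dequeue one packet per non-empty sub-queue per round (round-robin)."""
    out = []
    while any(queues):
        for q in queues:
            if q:
                out.append(q.popleft())
    return out

for p in ("A1", "A2", "A3"):
    enqueue("greedy-flow", p)
enqueue("small-flow", "B1")
order = drain()  # per-flow order preserved; flows interleaved across rounds
```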
-
RED and Codel (AQMs) dynamically control the size of the queue, not the scheduling. HFSC is the scheduler. fq_codel is the combination of CoDel (AQM) and Fair/Flow Queueing (scheduler).
In Linux, the damn module name is even called sch_hfsc.ko because it is a scheduling algorithm.
Yes, AQMs and schedulers have two different purposes. AQMs control queue size. A scheduler, well… schedules. To be clear, individual flows are FIFO, but the interleaving of flows is what is scheduled by HFSC.
Check the following link for more information about Fair Queueing and the way it attempts to interleave flows fairly. http://www.cs.cmu.edu/~hzhang/HFSC/TALK/sld027.htm Check some of the earlier and later slides for more info.
I was saying HFSC is a scheduler; that's what I've been saying this entire time, and you're the one saying it is not. You keep claiming it's a hybrid sch/AQM. fq_CoDel does not do scheduling in the same sense that a scheduler does. A scheduler determines the rate at which a queue is dequeued; fq_codel does nothing of the sort. It just determines which packet is at the head and when to drop packets, based on internal hash buckets of sub-queues. fq_codel has no idea when it will next be dequeued.
You're conflating similarly named terms between two different problem domains that use similar approaches but at different abstraction layers. The nuances are very important. An example of conflating terms: internally, fq_codel uses "queues", but fq_codel itself is a queue. The fact that it internally uses the computer-science definition of a queue as a data structure is irrelevant to the fact that it is a logical network "queue". The term "queue" in this case is being used in two completely different ways.
HFSC manages inter-queue and AQMs manage intra-queue. Cake is an example of a hybrid scheduler and AQM.
"Check the following link for more information about Fair Queueing and the way it attempts to interleave flows fairly. "
That link is not showing an interleaving of flows, but an interleaving of queues. If HFSC interleaved flows, we wouldn't have issues with bufferbloat.
-
RED and Codel (AQMs) dynamically control the size of the queue, not the scheduling. HFSC is the scheduler. fq_codel is the combination of CoDel (AQM) and Fair/Flow Queueing (scheduler).
In Linux, the damn module name is even called sch_hfsc.ko because it is a scheduling algorithm.
Yes, AQMs and schedulers have two different purposes. AQMs control queue size. A scheduler, well… schedules. To be clear, individual flows are FIFO, but the interleaving of flows is what is scheduled by HFSC.
Check the following link for more information about Fair Queueing and the way it attempts to interleave flows fairly. http://www.cs.cmu.edu/~hzhang/HFSC/TALK/sld027.htm Check some of the earlier and later slides for more info.
I was saying HFSC is a scheduler; that's what I've been saying this entire time, and you're the one saying it is not. You keep claiming it's a hybrid sch/AQM. fq_CoDel does not do scheduling in the same sense that a scheduler does. A scheduler determines the rate at which a queue is dequeued; fq_codel does nothing of the sort. It just determines which packet is at the head and when to drop packets, based on internal hash buckets of sub-queues. fq_codel has no idea when it will next be dequeued.
You're conflating similarly named terms between two different problem domains that use similar approaches but at different abstraction layers. The nuances are very important. An example of conflating terms: internally, fq_codel uses "queues", but fq_codel itself is a queue. The fact that it internally uses the computer-science definition of a queue as a data structure is irrelevant to the fact that it is a logical network "queue". The term "queue" in this case is being used in two completely different ways.
HFSC manages inter-queue and AQMs manage intra-queue. Cake is an example of a hybrid scheduler and AQM.
I have been saying HFSC schedules both inter-queue and intra-queue. If HFSC did no fair queueing intra-queue, then any flow could saturate a queue.
"Check the following link for more information about Fair Queueing and the way it attempts to interleave flows fairly. "
That link is not showing an interleaving of flows, but an interleaving of queues. If HFSC interleaved flows, we wouldn't have issues with bufferbloat.
I used the wrong example there. My mistake.
but anyway, I give up.