Pseudo fair queuing with HFSC
-
Yes, fair among queues but not fair within a queue. The point is to distribute traffic over many queues that are all identical. HFSC isolates each queue from the others, limiting the damage from greedy flows. Coupled with CoDel, this causes CoDel to become more aggressive if a flow attempts to monopolize too much bandwidth.
Ah, CoDel. Yeah, your idea works if the focus is to separate flows for CoDel, since CoDel is applied per queue (I think). You did not mention that initially.
Report back if this actually makes any measurable difference.
Dave Taht mentioned (in the bounty thread) that fq_CoDel was ~300 lines of code, which is a bit encouraging, but the $60k bounty confounds me, lol. The current pfSense CoDel still needs a couple of tweaks (target/interval), which may improve your situation in the short term.

HFSC is fair within queues though, not inter-queue (unless queues are the same priority, as I mentioned earlier). Usually, queues are different priorities, so fairness among them is literally impossible. The burden of proof is on you. Please cite something (from the HFSC paper) to prove your assumptions about HFSC's lack of intra-queue fairness, otherwise…
Edit: I think both of us are thinking of "fair" as something different than HFSC defines it. This is an area where I had to read not only the HFSC paper, but some cited papers. Unless you fully understand HFSC and its definition of "fair", be careful trying to change/hack it. IIRC, SCED (Service Curve Earliest Deadline) was the fairness algo used, being that HFSC focused on worst-case delay fairness.
-
Please cite something (from the HFSC paper) to prove your assumptions of HFSC's lack of intra-queue fairness
HFSC does not do intra-queue anything, that is the role of the queue.
Usually, queues are different priorities, so fairness among them is literally impossible
Depends on your definition of fair. HFSC provides minimum guarantees, which is a kind of fairness.
In other news, I was able to fully load my upload recently.
Setup
qHigh 40%
qNormal 40%
qDefault 20%

Idle pings
qHigh ISP ping 1ms
qNormal Internet ping 10ms

Load
qHigh 0
qNormal 64Mb/s-ish
qDefault 32Mb/s-ish

Loaded pings
qHigh ISP ping 0.2ms
qNormal Internet ping 11ms

qHigh was unaffected. qNormal saw a 1ms increase, but it was at full load. That was avg ping; jitter increased for qNormal, which is expected. It still seems like HFSC manages to completely isolate queues, with worst-case latency set by your bandwidth allocations.
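For what it's worth, the 64/32 split is exactly what simple proportional link sharing predicts when qHigh goes idle. A quick sketch of that arithmetic (the 96 Mb/s total is my assumption chosen to match the observed numbers; the actual link rate wasn't stated):

```python
# Sketch of HFSC-style proportional link sharing: an idle queue's share is
# redistributed to the active queues in proportion to their allocations.
# NOTE: link_rate=96.0 Mb/s is an assumed value, not from the thread.

def share_excess(allocations, active, link_rate):
    """Return each active queue's bandwidth when idle queues' shares
    are redistributed proportionally (simplified link-sharing model)."""
    total_active = sum(allocations[q] for q in active)
    return {q: link_rate * allocations[q] / total_active for q in active}

allocations = {"qHigh": 0.40, "qNormal": 0.40, "qDefault": 0.20}
# qHigh is idle, so qNormal and qDefault split the link 40:20 = 2:1,
# giving qNormal ~64 Mb/s and qDefault ~32 Mb/s.
print(share_excess(allocations, ["qNormal", "qDefault"], 96.0))
```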
-
Please cite something (from the HFSC paper) to prove your assumptions of HFSC's lack of intra-queue fairness
HFSC does not do intra-queue anything, that is the role of the queue.
I thought HFSC was the queue…
Edit: If your assumptions are correct, then in a single-queue setup with HFSC, HFSC is doing nothing, since all flows are intra-queue. Can you please post something (anything) to back up your claims?
-
HFSC is a scheduler.
-
The hierarchical fair-service curve (HFSC) is a network scheduling algorithm for a network scheduler
http://en.wikipedia.org/wiki/Hierarchical_fair-service_curve
The scheduler is what decides which queues to process and in what order.
http://www.openbsd.org/faq/pf/queueing.html
Queuing and scheduling are logically two orthogonal issues.
-
This is with 32/16 streams
-
@Harvy66, can you please post a screenshot(s) of the QoS settings you used for that last screenshot? Thanks
-
edit: I updated interfaces to include the scheduler
Here are the floating rules I use. This is a reduced list that does not include all of the game ports, etc. I just took screenshots and pasted them together for the main rules that I use.
Both LAN and WAN are identical, but I do have a symmetrical connection.
-
The hierarchical fair-service curve (HFSC) is a network scheduling algorithm for a network scheduler
http://en.wikipedia.org/wiki/Hierarchical_fair-service_curve
The scheduler is what decides which queues to process and in what order.
http://www.openbsd.org/faq/pf/queueing.html
Queuing and scheduling are logically two orthogonal issues.
HFSC added the ability to "decouple delay and bandwidth" to an earlier algorithm.
This algorithm was called Hierarchical Packet Fair Queueing (HPFQ).
Here is a quote from the HFSC-dedicated web-page of one of HFSC's authors.
We have developed an idealized model that is the first to simultaneously capture the requirements of the three important services in an integrated services computer network: guaranteed real-time, adaptive best-effort, and hierarchical link-sharing services. We then designed two hierarchical scheduling algorithms, Hierarchical Packet Fair Queueing (H-PFQ) and Hierarchical Fair Service Curve (H-FSC). H-PFQ is the first algorithm to simultaneously support all three of real-time, adaptive best-effort, and link-sharing services. H-FSC allows a more flexible resource management than H-PFQ by decoupling the delay and bandwidth allocation. From a conceptual point of view, H-FSC is the first algorithm that goes beyond Fair Queueing and still satisfies all important requirements in integrated services networks. To find out more about H-PFQ and HFSC's technical advantages, click here.
HFSC is a Fair Queueing algorithm, or to simplify, HFSC is a queueing algorithm. The queueing algorithm schedules the queue. This is fundamental stuff…
The whole premise for this thread is redundant and useless at best, except for the CoDel idea.
-
If HFSC did queuing, there would be no point in indicating the type of queue you want (CoDel, RED, etc.). If HFSC did fair queuing within a queue, there would be no point to fq_CoDel. HFSC does not do queuing in the normal usage of the term "queue". HFSC is not an AQM.
AQMs and schedulers serve two completely different purposes. One handles scheduling the dequeuing of queues and the other handles how buffering occurs within a queue. Some implementations, like Cake, combine the two, like how ZFS combines volume management with the file system. Not to say Cake gains as much awesomeness as ZFS did by combining abstractions.
Because they are two different areas of research, they have slightly different usages for their terms, even when they use the same words with similar meanings. "Queuing" in scheduling is different from "queuing" in queue management.
"decouple delay and bandwidth" is worthless if you have 500ms of packets backlogged in your queue. HFSC does nothing about managing the bufferbloat issue; you need an AQM for that. But HFSC does an excellent job of making sure your queues get their minimum allocated bandwidth, and it allows "decoupling" packet delays from available bandwidth.
No longer does your 1Mb/s queue take 12ms to send a 1500 byte packet; you can make it 1.2ms by giving it a 10Mb service curve with an appropriate "d", while still keeping it limited to 1Mb/s. But that does nothing to help bufferbloat. If you have a standing queue of 100 1500 byte packets, even if the head packet takes only 1.2ms to send because of the service curve, the queueing delay is still going to be about 1200ms, dwarfing any benefit of your service curve.
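The arithmetic above, as a quick sketch: serialization delay of one packet at a given rate, and the drain time of a standing backlog. The packet size and rates are the ones from this example:

```python
# Serialization delay of a packet and the drain time of a standing queue.
# Numbers match the 1 Mb/s vs 10 Mb/s example above.

def serialization_delay_ms(packet_bytes, rate_mbps):
    """Time to put one packet on the wire, in milliseconds."""
    return packet_bytes * 8 / (rate_mbps * 1_000_000) * 1000

print(serialization_delay_ms(1500, 1))    # ~12 ms at 1 Mb/s
print(serialization_delay_ms(1500, 10))   # ~1.2 ms at 10 Mb/s

# A standing queue of 100 such packets draining at 1 Mb/s:
print(100 * serialization_delay_ms(1500, 1))  # ~1200 ms of queueing delay
```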
Another way to think about it.
Queue managers attempt to reach the coveted ideal 1 packet standing queue, keeping queue time to the absolute minimum while maximizing the queue's throughput. The scheduler manages when the head packet gets sent from a specific queue.
-
RED and Codel (AQMs) dynamically control the size of the queue, not the scheduling. HFSC is the scheduler. fq_codel is the combination of CoDel (AQM) and Fair/Flow Queueing (scheduler).
In Linux, the damn module name is even called sch_hfsc.ko because it is a scheduling algorithm.
Yes, AQMs and schedulers have two different purposes. AQMs control queue size. A scheduler, well… schedules. To be clear, individual flows are FIFO, but the interleaving of flows is what is scheduled by HFSC.
Check the following link for more information about Fair Queueing and the way it attempts to interleave flows fairly. http://www.cs.cmu.edu/~hzhang/HFSC/TALK/sld027.htm Check some of the earlier and later slides for more info.
-
RED and Codel (AQMs) dynamically control the size of the queue, not the scheduling. HFSC is the scheduler. fq_codel is the combination of CoDel (AQM) and Fair/Flow Queueing (scheduler).
In Linux, the damn module name is even called sch_hfsc.ko because it is a scheduling algorithm.
Yes, AQMs and schedulers have two different purposes. AQMs control queue size. A scheduler, well… schedules. To be clear, individual flows are FIFO, but the interleaving of flows is what is scheduled by HFSC.
Check the following link for more information about Fair Queueing and the way it attempts to interleave flows fairly. http://www.cs.cmu.edu/~hzhang/HFSC/TALK/sld027.htm Check some of the earlier and later slides for more info.
I was saying HFSC is a scheduler; that's what I've been saying this entire time, and you're the one saying it is not. You keep claiming it's a hybrid scheduler/AQM. fq_CoDel does not do scheduling in the same sense that a scheduler does. A scheduler determines the rate at which a queue is dequeued; fq_codel does nothing of the sort. It just determines which packet is at the head and when to drop packets, based on internal hash buckets of sub-queues. fq_codel has no idea when it will next be dequeued.
You're conflating similarly named terms between two different problem domains that use similar approaches but at different abstraction layers. The nuances are very important. An example of conflating terms: internally, fq_codel uses "queues", but fq_codel itself is a queue. The fact that it internally uses the computer science definition of queues as data structures is irrelevant to the fact that it is a logical network "queue". The term "queue" in this case is being used in two completely different ways.
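To make the "hash buckets of sub-queues" point concrete, here's a toy sketch of the flow-hashing step: a packet's 5-tuple is hashed into one of N buckets so each flow lands in its own sub-queue (modulo collisions). The CRC32 hash is illustrative only; IIRC the real fq_codel uses a salted Jenkins hash over 1024 buckets by default.

```python
import zlib

NUM_BUCKETS = 1024  # fq_codel's default flow count, if memory serves

def flow_bucket(src_ip, dst_ip, proto, sport, dport):
    """Hash a 5-tuple into a sub-queue index (illustrative hash, not jhash)."""
    key = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    return zlib.crc32(key) % NUM_BUCKETS

# Packets of the same flow always land in the same sub-queue:
a = flow_bucket("10.0.0.2", "93.184.216.34", "tcp", 50000, 443)
b = flow_bucket("10.0.0.2", "93.184.216.34", "tcp", 50000, 443)
print(a == b)  # True: same 5-tuple, same bucket
# A different flow will almost certainly hash to a different bucket.
```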
HFSC manages inter-queue and AQMs manage intra-queue. Cake is an example of a hybrid scheduler and AQM.
"Check the following link for more information about Fair Queueing and the way it attempts to interleave flows fairly. "
That link is not showing an interleaving of flows, but an interleaving of queues. If HFSC interleaved flows, we wouldn't have issues with bufferbloat.
-
RED and Codel (AQMs) dynamically control the size of the queue, not the scheduling. HFSC is the scheduler. fq_codel is the combination of CoDel (AQM) and Fair/Flow Queueing (scheduler).
In Linux, the damn module name is even called sch_hfsc.ko because it is a scheduling algorithm.
Yes, AQMs and schedulers have two different purposes. AQMs control queue size. A scheduler, well… schedules. To be clear, individual flows are FIFO, but the interleaving of flows is what is scheduled by HFSC.
Check the following link for more information about Fair Queueing and the way it attempts to interleave flows fairly. http://www.cs.cmu.edu/~hzhang/HFSC/TALK/sld027.htm Check some of the earlier and later slides for more info.
I was saying HFSC is a scheduler; that's what I've been saying this entire time, and you're the one saying it is not. You keep claiming it's a hybrid scheduler/AQM. fq_CoDel does not do scheduling in the same sense that a scheduler does. A scheduler determines the rate at which a queue is dequeued; fq_codel does nothing of the sort. It just determines which packet is at the head and when to drop packets, based on internal hash buckets of sub-queues. fq_codel has no idea when it will next be dequeued.
You're conflating similarly named terms between two different problem domains that use similar approaches but at different abstraction layers. The nuances are very important. An example of conflating terms: internally, fq_codel uses "queues", but fq_codel itself is a queue. The fact that it internally uses the computer science definition of queues as data structures is irrelevant to the fact that it is a logical network "queue". The term "queue" in this case is being used in two completely different ways.
HFSC manages inter-queue and AQMs manage intra-queue. Cake is an example of a hybrid scheduler and AQM.
I have been saying HFSC schedules both inter-queue and intra-queue. If HFSC does no Fair Queueing intra-queue then any flow could saturate a queue.
"Check the following link for more information about Fair Queueing and the way it attempts to interleave flows fairly. "
That link is not showing an interleaving of flows, but an interleaving of queues. If HFSC interleaved flows, we wouldn't have issues with bufferbloat.
I used the wrong example there. My mistake.
but anyway, I give up.
-
"I have been saying HFSC schedules both inter-queue and intra-queue. If HFSC does no Fair Queueing intra-queue then any flow could saturate a queue."
HFSC does not do anything with flows, it does not do hashing, it doesn't do anything with IP, nothing. All it does is pull the head packet from a child queue and decide which queue goes next. It's a queue scheduler.
Fair queuing, in the context of a queue, fights buffer bloat by isolating flows from each other within the queue.
Fair queuing, in the context of a scheduler, gives a fair amount of resources between queues.

Both HFSC and fq_CoDel do "fair queuing" at different levels.
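To illustrate "pull the head packet from a child queue and decide which queue goes next" with something concrete, here's a toy deficit-round-robin scheduler. This is NOT HFSC's service-curve math (which is far more involved); it just shows the scheduler's job: picking which child queue transmits next, without ever looking inside packets. Queue contents and quantums are made up.

```python
from collections import deque

def drr_schedule(queues, quantums, rounds):
    """Deficit round robin over child queues.
    queues: name -> deque of packet sizes (bytes). Returns transmit order."""
    deficits = {name: 0 for name in queues}
    order = []
    for _ in range(rounds):
        for name, pkts in queues.items():
            if not pkts:
                continue
            deficits[name] += quantums[name]  # earn credit this round
            # send head packets while credit covers them
            while pkts and pkts[0] <= deficits[name]:
                deficits[name] -= pkts[0]
                order.append((name, pkts.popleft()))
    return order

# Two child queues, equal quantums: the scheduler alternates between them.
queues = {"qA": deque([1500, 1500]), "qB": deque([1500])}
print(drr_schedule(queues, {"qA": 1500, "qB": 1500}, 2))
# → [('qA', 1500), ('qB', 1500), ('qA', 1500)]
```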
-
"I have been saying HFSC schedules both inter-queue and intra-queue. If HFSC does no Fair Queueing intra-queue then any flow could saturate a queue."
HFSC does not do anything with flows, it does not do hashing, it doesn't do anything with IP, nothing. All it does is pull the head packet from a child queue and decide which queue goes next. It's a queue scheduler.
Fair queuing, in the context of a queue, fights buffer bloat by isolating flows from each other within the queue.
Fair queuing, in the context of a scheduler, gives a fair amount of resources between queues.

Both HFSC and fq_CoDel do "fair queuing" at different levels.
No. Fair Queueing is exclusively concerned with flows.
https://en.m.wikipedia.org/wiki/Fair_queuing
Fair queuing is a family of scheduling algorithms used in some process and network schedulers. The concept implies a separate data packet queue (or job queue) for each traffic flow (or for each program process) as opposed to the traditional approach with one FIFO queue for all packet flows (or for all process jobs). The purpose is to achieve fairness when a limited resource is shared, for example to avoid that flows with large packets (or processes that generate small jobs) achieve more throughput (or CPU time) than other flows (or processes).
To claim "Fair Queueing", you must separate all flows (or most of the flows, like with SFQ). Above, it says each flow gets a "separate data packet queue", meaning this is automatic and not dependent on the user manually separating the flows like your "pseudo fair-queueing" setup. HFSC is a Fair Queueing algo; therefore it separates all flows, by definition.
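As a toy illustration of the definition quoted above (one queue per flow, serviced round-robin, versus one FIFO for everything), with made-up flows and packet counts:

```python
from collections import deque, defaultdict

# A bulk flow floods 8 packets; a voip flow sends 2.
arrivals = ["bulk"] * 8 + ["voip"] * 2

# Single FIFO: service order follows arrival order, so voip waits
# behind every bulk packet already queued.
fifo_order = list(arrivals)

# Fair queuing: each flow gets its own queue, serviced round-robin,
# so voip is interleaved immediately regardless of bulk's backlog.
flows = defaultdict(deque)
for f in arrivals:
    flows[f].append(f)
fq_order = []
while any(flows.values()):
    for q in list(flows.values()):
        if q:
            fq_order.append(q.popleft())

print(fifo_order.index("voip"))  # 8: voip waits behind all bulk packets
print(fq_order.index("voip"))    # 1: voip served after one bulk packet
```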
HFSC cites many other Fair Queueing algorithms, including the paper that all modern Fair Queueing algorithms attempt to approximate as closely as possible, titled "A generalized processor sharing approach to flow control in integrated services networks".
For the sake of clarity, the definition of a "flow" can be found here: https://en.m.wikipedia.org/wiki/Traffic_flow_(computer_networking)
Do me a favor and read the Generalized Processor Sharing paper (or even just the wikipedia entry) along with some papers cited by HFSC and any other academic papers you can find concerning Fair Queueing. Confirm or disprove your suspicions before replying. I have read all HFSC-cited papers and dozens of related papers and I can assure you that your posts in this thread are mostly misinformation.
Edit: Fixed link, trimmed cruft.