HFSC explained - decoupled bandwidth and delay
-
It's kind of both. The scheduling is done at the queue level, but it happens in discrete steps of packets. The service curve is essentially the priority: whichever curve is "greater" at any given moment is scheduled next. The m1 and d parameters modify the curve to make it look greater for a while. If sending a packet would push the queue beyond its limit, the packet isn't sent yet. Of course, if there is spare bandwidth and no other packets waiting, it just sends the packet anyway.
I just think of HFSC as a dynamic priority-based queue where the priority is adjusted in real time for each packet processed, such that all queue configurations are respected. Of course my simplification breaks down in extreme situations, like absurdly low bandwidths, absurdly large MTUs, or absurdly low target latencies, but it's a good rule of thumb.
-
(Using the scenario from earlier.)
If 8 packets are instantly queued and m1/d were per queue, the queue would only get 256Kbit for 5ms, which would only be 1 packet transmitted, violating HFSC's latency guarantees for the other 7 packets. If it were per queue we'd need to set m1 to 8x the bandwidth to support 8 concurrent sessions, which may be true, but I think m1/d is per packet, since HFSC's m1/d/m2 correspond with umax/dmax/r, which are defined on page 10 of the HFSC paper as follows:
Each session is characterized by three parameters: the largest unit of work, denoted umax, for which the session requires delay guarantee, the guaranteed delay dmax, and the session's average rate r. As an example, if a session requires per packet delay guarantee, then umax represents the maximum size of a packet. Similarly, a video or an audio session can require per frame delay guarantee, by setting umax to the maximum size of a frame.
umax = the maximum packet size, and umax/dmax directly correspond with m1/d, therefore m1/d are per packet as well.
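As a back-of-the-envelope check of that reading (my own numbers, not the paper's): take umax = one 1500-byte packet and dmax = 5ms, then m1 is just umax/dmax:

  echo $(( 1500 * 8 ))             # umax = 12000 bits for a full-size packet
  echo $(( 1500 * 8 * 1000 / 5 ))  # umax/dmax = 2400000 bit/s, i.e. m1 ~= 2.4Mbit/s for d = 5ms

So under the per-packet reading, m1/d only has to cover one maximum-size unit of work, and m2 stays the session's average rate r.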
Edit: Also, I'm not sure what "session" means exactly… maybe flow?
I don't understand the HFSC paper very well, but the few parts I do understand only mention single-session (flow?) queues, which may be the only way to properly use HFSC, but that seems strangely limited. We'd had this per-packet/per-queue argument before and I had a few more references, but I forget them now. Maybe they're in this thread? Reading the source code, or even the pseudo-code in the HFSC paper, could answer our question, but it seems too advanced for me...
Testing this per-queue/per-packet theory should be reasonably easy. Someday... :-\
-
Latency guarantees only hold if you don't go past your bandwidth limit. If you burst 8 1280-bit (160-byte) packets at once, the last packet has 8960 bits of data in front of it, 10,240 bits including itself. To send 10,240 bits in 5ms, you need about 2Mbit/s of bandwidth. You can't have your cake and eat it too. It is impossible in this universe to guarantee all 8 packets will get sent within 5ms with anything less than ~2Mb/s.
Even in this case, the average bandwidth is 1280 bits * 20/sec (every 50ms) * 8 packets = 204,800bps, which is about 1/10th of the bandwidth you need to make sure none of those 8 packets hang in the queue for more than 5ms. You need about 2Mb/s, but you only need it for 5ms. Your average is 204.8Kb/s.
m1 = burst
d = duration of the burst (of course you can't send fractional packets, so you need to make sure all the packets you want to burst fit in m1*d)
m2 = overall average
m1 is moot if you attempt to go over m2.
*my interpretation
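For what it's worth, here's how I'd sketch that scenario in tc (a rough sketch with a made-up interface, class IDs, and default class; the filters that actually classify the traffic are left out):

  tc qdisc add dev eth0 root handle 1: hfsc default 20
  # burst: 8 x 1280 bits = 10240 bits within 5ms ~= 2048Kbit/s (m1)
  # average: 1280 bits * 20/sec * 8 flows ~= 205Kbit/s (m2)
  tc class add dev eth0 parent 1:0 classid 1:10 hfsc \
       rt m1 2048kbit d 5ms m2 205kbit    # the 8 VoIP-like flows
  tc class add dev eth0 parent 1:0 classid 1:20 hfsc \
       ls m2 1mbit                        # everything else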
-
Any response to the HFSC paper saying that m1/d & umax/dmax are per packet?
It's confusing since m2 is obviously per queue.
-
Well, actually, it doesn't exactly say it's per-packet. It just says "if a session requires per packet delay guarantee, then umax represents the maximum size of a packet"… so, would 7 separate VOIP flows be considered a single session or multiple sessions? Is a queue's traffic considered a single "session", regardless?
Fuck, I really don't want to read the HFSC paper for the 100th time... Though, IIRC, the paper only mentions single-flow scenarios, so the source code might provide the only proper answer.
I'm beginning to think you may be right though; m1 should be multiplied by 7 to support 7 concurrent flows.
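If that's the case, the numbers for 7 concurrent flows (using the 160-byte/50ms flows from earlier, my arithmetic) would be:

  echo $(( 7 * 1280 * 1000 / 5 ))   # 1792000 bit/s, so m1 ~= 1792Kbit/s for d = 5ms
  echo $(( 7 * 1280 * 20 ))         # 179200 bit/s average, which is what m2 has to cover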
HFSC... :-\
-
Again, from http://linux-tc-notes.sourceforge.net/tc/doc/sch_hfsc.txt:

Example 3.

That worked, but not as well as you hoped. You recall TCP can send multiple packets before waiting for a reply and you would like them all sent before web gets to send it's first packet. You also estimate you have 10 concurrent users all doing the same thing. From a packet capture you decide sending 4 packets before waiting for a reply is typical.

10 users by 4 packets each means 40 MTU sized packets. Thus you must adjust the burst speed, so ssh gets 40 times the speed of web and you must allow for 40 MTU sized packets in the burst time:

  tc class add dev eth0 parent 1:0 classid 1:1 hfsc \
       ls m1 24kbps d 60ms m2 500kbps     # web
  tc class add dev eth0 parent 1:0 classid 1:2 hfsc \
       ls m1 975kbps d 60ms m2 500kbps    # ssh

Note that 975:24 is 40:1, and (40 * 1500 / 1mbps = 60).
Seems to imply the same. Though I think that math might be wrong. Seems like it should be (40 * 1500 * 8 / 1mbps = 457ms), since a 1500-byte MTU is 12,000 bits, not 1500.
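Quick sanity check of that (my arithmetic), counting bits and taking 1mbps as 10^6 bit/s (the 457ms figure treats 1mbps as 2^20):

  echo $(( 40 * 1500 * 8 ))                  # 480000 bits in the 40-packet burst
  echo $(( 40 * 1500 * 8 * 1000 / 1000000 )) # ~480ms to drain that at 1Mbit/s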
-
Yeah, he makes that mistake in example 2 as well. I tried emailing him months ago but the email bounced.
I guess it would make more sense if m1/m2 were both per queue.
-
I really need to add: my interpretation is based entirely on their description of the problem, a highly abstract view of how they're trying to solve it, the variables exposed to the end user (m1/d/m2) that express how the issue is addressed, and the desired result.
I am filling in the implementation blanks with how I would go about "solving" the issue at a lower, but still abstract, level. This gives rise to how I think the variables are meant to be used. Typically the simplest answer is the best answer, and to me this is the simplest way to think about the problem. But dear lord, the math and implementation details behind it... nope, I can't handle that. I find it easier to try to understand the problem and infer the details than to read the details directly.
With nothing empirical backing up my opinions, I am very open to corrections. Nullity has already corrected my notion of a "burst". I was at first thinking of it as additive to the average (think Comcast's "Powerboost"), but it turns out the average must be respected. At that point I realized it was not a way to add more bandwidth for a short amount of time, but a way to compress the available bandwidth into an uneven distribution.
Packets aren't evenly distributed, so the problem being solved is reducing the latency caused by this uneven distribution. Some of the unevenness comes from bursts of packets, but another part is that packets are atomic: if a packet is too large for the available bandwidth (this is the case they focus on when describing the problem), your latency may go up. You can hide this latency without increasing the average bandwidth by "bursting" the bandwidth and making it unevenly distributed.
Of course this happens by temporarily increasing the bandwidth at the micro-level, but not the macro-level.
This is what I think they're trying to solve with m1+d.
edit:
Another way to say it is that bandwidth and latency are naturally coupled. For example, you cannot send a 1500-byte packet on a 1Mb/s link in less than 12ms (assuming store-and-forward). For any given bandwidth, there is a minimum latency for a given packet size. HFSC allows you to "decouple" latency and bandwidth by temporarily giving a queue more bandwidth, so it acts like it has the latency of a faster link, while still not getting more than its fair share of bandwidth overall.
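To put numbers on that (my arithmetic plus a hypothetical class, not anyone's real config):

  echo $(( 1500 * 8 ))                   # 12000 bits in one full-size packet
  echo $(( 1500 * 8 * 1000 / 1000000 ))  # 12ms minimum serialization time at 1Mbit/s
  # an rt curve can "buy" that packet a ~1ms dispatch for a class that only
  # averages 1Mbit/s, provided the physical link is fast enough:
  tc class add dev eth0 parent 1:0 classid 1:30 hfsc \
       rt m1 12mbit d 1ms m2 1mbit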
HFSC also has a secondary decoupling of bandwidth and latency, but here it is not undoing the natural coupling; it is undoing the pseudo-coupling of bandwidth and latency that nearly all traffic-shaping algorithms seem to have. This coupling isn't fundamental, it's a bias of how most algorithms work. For example, giving one queue 2x more bandwidth than another will often result in that queue having half the latency, even when the link is idle and you attempt to send a single small packet. This is because most algorithms use a kind of "back-off", strict-priority-based scheduling. This causes the shaper to "wait" for free bandwidth, and the less bandwidth a queue has, the longer it waits before it decides "yes, this bandwidth really is free".
HFSC uses "curves" to dynamically change the priority of a queue. If a queue is idle, it will have the highest priority. This means a 64Kb/s queue can have a higher priority than a 1Gb/s queue. But as soon as a packet goes through that 64Kb/s queue, its priority drops. How far it drops relative to other queues depends on the ratio of their assigned bandwidths. In this way, HFSC can pretty much guarantee that a queue of a given bandwidth will never have worse latency than a dedicated link of that same bandwidth (ignoring kernel scheduling issues), but a queue can also have latency as low as the actual physical link rate allows.
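A hedged illustration of that last point (classes and rates are made up): two link-sharing classes with wildly different shares both start from "idle = highest priority", so an idle 1:40 can still get its first packet out at close to link speed; it only falls behind 1:50 once its recent service is counted against its much smaller curve:

  tc class add dev eth0 parent 1:0 classid 1:40 hfsc ls m2 64kbit
  tc class add dev eth0 parent 1:0 classid 1:50 hfsc ls m2 100mbit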
The typical observed latency is probably somewhere in between. In most real networks, the vast majority of connections are TCP, which responds to congestion, and a connection is rarely able to maintain 100% saturation for more than a short burst. What this often means is that there is a bit of breathing room on the link. This makes your typical observed latency quite low, closer to the link rate than the provisioned rate, unless your queue is bufferbloated with a large backlog.
Bufferbloat is the enemy of all.
-
Thanks for the great explanation of HFSC.
I am trying to shape my game traffic (Overwatch) on my network.
I have captured the game's UDP packets in Wireshark; the source is the game server --> my IP.
I wanted to calculate m1 as you mentioned above, but I'm confused about which packet I am supposed to base my calculations on.
Do I pick the largest UDP packet I received, or do I take the average size of 100 packets to calculate m1?
-
The largest. It might be easier to just calculate m1 as 1500, or whatever the MTU is. Unless the bandwidth allocation for your queues is very fine-grained, overallocating isn't a bad choice to assure that Overwatch never gets starved for latency/bandwidth.
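As a hedged sketch of what that could look like (interface, class IDs, and the m2 rate are placeholders; you would still need tc filters matching the game's ports to steer traffic into 1:10):

  tc qdisc add dev eth0 root handle 1: hfsc default 20
  # one MTU-sized packet (1500 * 8 = 12000 bits) within a 5ms target needs
  # m1 of about 2.4Mbit/s; m2 should be whatever the game's real average rate is
  tc class add dev eth0 parent 1:0 classid 1:10 hfsc \
       rt m1 2400kbit d 5ms m2 512kbit    # game traffic (m2 is a placeholder)
  tc class add dev eth0 parent 1:0 classid 1:20 hfsc \
       ls m2 10mbit                       # everything else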
-
Also, the game runs at a 60 tick rate. Do I have to adjust d?