@Harvy66:
I'm not disagreeing with them; I'm just saying math proofs don't tell you how to use HFSC practically, only that it works "as expected" assuming you understand it. 99% of experienced, well-educated networking people will not understand how network traffic behaves at sub-1ms time scales, so don't even think about time scales that small. I do agree that my "evidence" is anecdotal; I have limited tools. Another term for "anecdotes" is "data points": albeit with high uncertainty, there is some truth to them.
In order to properly configure HFSC the way you've been giving examples, you would need to know all of the following and how they interact (see the sketch after this list):
Packet Size
PPS (packets per second) during transmission
Average Bandwidth
Number of flows
Distribution of packets within a flow
Distribution of packets among all flows
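To make "how these interact" concrete, here is a rough back-of-the-envelope sketch in Python (my own illustration, not from the HFSC paper; the 10 Mb/s link and 64 Kb/s flow are assumed numbers) showing why packet size, link rate, and PPS pull you down into sub-millisecond territory:

```python
# Back-of-the-envelope: how packet size, link rate, and flow rate interact
# at small time scales. Illustration only, not from the HFSC paper.

def per_packet_numbers(packet_bytes, link_bps, flow_bps):
    """Return (serialization time, packets/s for the flow, avg gap between packets)."""
    packet_bits = packet_bytes * 8
    serialization_s = packet_bits / link_bps   # time one packet occupies the wire
    pps = flow_bps / packet_bits               # packets per second for this flow
    inter_packet_gap_s = 1.0 / pps             # average spacing between packets
    return serialization_s, pps, inter_packet_gap_s

# Assumed example: 160-byte VoIP packets on a 10 Mb/s uplink, 64 Kb/s flow.
ser, pps, gap = per_packet_numbers(160, 10e6, 64e3)
print(f"serialization: {ser*1e6:.0f} us, pps: {pps:.0f}, gap: {gap*1e3:.0f} ms")
# -> serialization: 128 us, pps: 50, gap: 20 ms
```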
The examples the HFSC links use are kept simple to prove a point, but they do not reflect the complications of a real network. Taking the examples at face value over-simplifies the issue; in other words, don't think about individual packets unless that is exactly what happens at the bandwidths you're working with.
**Up to this point I've been arguing for an almost laissez-faire sort of configuration while you've been arguing for an extremely precise one. I know I was mostly playing devil's advocate to give an alternative view, but I think the middle ground is best. This paper (http://www.cs.cmu.edu/~hzhang/papers/SIGCOM97.pdf), which you linked at one point in another thread, talks about 160-byte VoIP packets with an average rate of 64Kb/s. They configured and talked about the burst duration as a target latency: they set the duration to 5ms because they wanted a 5ms target. m1 was derived from the packet size, 160 bytes (not from bandwidth like you've mentioned), spread over the 5ms, which gives the 256Kb/s in their example, because 160 bytes / 5ms = 256Kb/s.
What I was able to gain from this example is that you set the m1 bandwidth according to the size you wish to "burst", and the duration to the time in which you wish those extra bytes to be transferred.
I feel fairly confident that d (duration), for the purpose of latency-sensitive traffic, should be set to your worst-case target latency. m1 should be thought of not as bandwidth, but as the size in bits of the total number of packets transmitted during that duration. m2 would be set to the average amount of bandwidth consumed.**
Incorrect. m1 and m2 define bandwidth.
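For what it's worth, the arithmetic in the VoIP example above works out either way you read it, as long as m1 ends up as a rate. A minimal sketch in Python (my own illustration, not from the paper or pfSense) of deriving the m1 rate from a burst size and a target delay:

```python
def m1_from_burst(burst_bytes, target_delay_s):
    """Derive the m1 rate (bits/s) needed to serve `burst_bytes` within `target_delay_s`.
    m1 itself is still a bandwidth; the byte count and delay are only inputs."""
    return burst_bytes * 8 / target_delay_s

# The paper's VoIP example: 160-byte packet, 5 ms delay target, 64 Kb/s average rate.
m1 = m1_from_burst(160, 0.005)   # -> 256000.0 bits/s, i.e. 256 Kb/s
d = 0.005                        # duration of the first segment (seconds)
m2 = 64_000                      # average rate (bits/s)
print(f"m1 = {m1/1000:.0f} Kb/s, d = {d*1000:.0f} ms, m2 = {m2/1000:.0f} Kb/s")
```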
@Nullity:
This quote from the HFSC paper is useful because it clarifies the differences between the parameters used in the paper (u-max, d-max, r) and the parameters that pfSense uses (m1, d, m2).
Each session i is characterized by three parameters: the largest unit of work, denoted u-max, for which the session requires delay guarantee, the guaranteed delay d-max, and the session's average rate r. As an example, if a session requires per packet delay guarantee, then u-max represents the maximum size of a packet. Similarly, a video or an audio session can require per frame delay guarantee, by setting u-max to the maximum size of a frame. The session's requirements are mapped onto a two-piece linear service curve, which for computation efficiency is defined by the following three parameters: the slope of the first segment m1, the slope of the second segment m2, and the x-coordinate of the intersection between the two segments x. The mapping (u-max, d-max, r) -> (m1, x, m2) for both concave and convex curves is illustrated in Figure 8.
The paper uses u-max (packet/frame size) in its examples, but pfSense uses m1 (bandwidth).
When glancing at a configuration using m1, d, and m2, it is obvious whether the parameters are meant to decrease delay (m1 > m2) or increase delay (m1 < m2).
The u-max/d-max notation is not as intuitive.
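Here is a small Python sketch of that mapping as I read Figure 8 of the paper (treat the convex branch in particular as my assumption rather than a quote):

```python
def to_service_curve(u_max_bits, d_max_s, r_bps):
    """Map the paper's (u-max, d-max, r) onto a two-piece curve (m1, x, m2).
    My reading of Figure 8: if u-max/d-max >= r the curve is concave
    (m1 > m2, first segment ends at d-max); otherwise it is convex
    (m1 = 0, and the second segment starts late enough that u-max
    is still served by d-max at rate r)."""
    if u_max_bits / d_max_s >= r_bps:        # concave: front-load service
        m1 = u_max_bits / d_max_s
        x = d_max_s
    else:                                    # convex: delay service, then run at r
        m1 = 0.0
        x = d_max_s - u_max_bits / r_bps
    m2 = r_bps
    return m1, x, m2

# The paper's VoIP example: 160-byte packets, 5 ms delay guarantee, 64 Kb/s rate.
print(to_service_curve(160 * 8, 0.005, 64_000))  # -> (256000.0, 0.005, 64000)
```

If I read it correctly, x here plays the role of pfSense's d (the length of the first segment), and the two slopes are the m1 and m2 bandwidths.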
I pulled the quote from the HFSC thread, in case you would like to move this HFSC-related conversation there.
https://forum.pfsense.org/index.php?topic=89367.0
Edit: Fixed typo in quote from HFSC paper.