The topology of hierarchical queues can be surprisingly powerful.
-
Although I am reasonably acquainted with HFSC's overly complicated abilities, one thing that always surprised me was how powerful the topological layout of the queues could be.
Before assigning any bandwidth to a queue, the layout needs to be carefully considered.
How will you group traffic? By interface, location, bandwidth requirements, latency requirements?
You could even group by time, with one queue housing two types of traffic: one active in the morning, the other active at night. There are a lot of ways to set it up, and I honestly think most people could accomplish more with a clever topology, considering that very few actually use HFSC properly.
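For instance, the time-of-day idea could be as simple as this (made-up names; since the two classes are rarely active at the same time, each one effectively gets the parent's whole 40% while it is busy):

qTimeShare 40%
--qMorning 50%
--qNight 50%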
Here are some pics showcasing what I am referring to. Notice the measured difference achieved by a simple topology change alone. The images are from the author of HFSC.
[Image: Before]
[Image: After]
Meh… it impresses me. ;)
If anyone has some good texts on how to fully exploit hierarchical queueing's topology in interesting ways, please share: tricks or examples of using multi-level hierarchies, or ideas about how to separate traffic so that a multi-level hierarchy makes the most impact.
I have high hopes that a clever topology guide could prove especially useful for pfSense newbies, since topology is primarily visual.
-
One example could be this:

qNormal 80%
--qHigh 50%
--qLow 50%
qIdle 20%
--qDefault 50%
--qUDP 50%

Give your classified traffic 80%, with a 50/50 split between high priority and everything else. The remaining 20% is for unclassified traffic, with 50% of that for UDP. You can create a floating rule at the top that sends all UDP traffic to qUDP; later rules will override that for any traffic they match. This will separate your unclassified UDP and TCP traffic.
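In raw ALTQ terms that tree would look something like the sketch below (roughly what pfSense generates from the shaper GUI; em0 and the 100Mb link rate are placeholders, and child percentages are relative to the parent queue):

altq on em0 hfsc bandwidth 100Mb queue { qNormal, qIdle }
queue qNormal bandwidth 80% { qHigh, qLow }
queue qHigh bandwidth 50% hfsc
queue qLow bandwidth 50% hfsc
queue qIdle bandwidth 20% { qDefault, qUDP }
queue qDefault bandwidth 50% hfsc(default)
queue qUDP bandwidth 50% hfsc

# the floating rule: match all UDP up front; later, more specific
# rules re-queue whatever traffic they match
match out on em0 proto udp queue qUDP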
-
I went with this setup last night.
20% for ACKs, and then qClassified gets the golden ratio (about 1.618x) more bandwidth than qUnclassified. qUnclassified is where my unclassified and P2P traffic goes, because P2P is so hard to classify. I have broken qUnclassified into two groups, qUDP and the normal default queue, which will primarily be unclassified TCP. I have a floating rule at the very top to match all UDP traffic and place it in qUDP.
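In tree form (49/31 is just my golden-ratio arithmetic on the 80% left after ACKs: 80 / 2.618 ≈ 31, and 31 × 1.618 ≈ 49):

qACK 20%
qClassified 49%
qUnclassified 31%
--qDefault 50%
--qUDP 50%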
The bandwidth is split 50/50 between qDefault and qUDP, but qUDP has a service curve that gives it a 25% boost for the first 5ms. My connection is quite fast, so 5ms is a long time. Based on my limited understanding of service curves, m1 is the pseudo-bandwidth, for lack of a better term, d is the target latency (it must be a realistic value for your connection), and of course m2 is your actual sustained bandwidth.
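Concretely, the qUDP leaf would look something like this in ALTQ notation, assuming for illustration that its 50% share works out to 40Mb; I am reading "25% boost" as m1 = 1.25 × m2, and d is in milliseconds:

# linkshare(m1 d m2): behave like 50Mb for the first 5ms of backlog,
# 40Mb sustained after that
queue qUDP bandwidth 50% hfsc( linkshare(50Mb 5 40Mb) )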
I don't want to get into the exact math of how service curves work, but the one example I saw was a 64Kb stream where they wanted to cut the per-packet delay in half, so they gave the queue an m1 of 128Kb and an m2 of 64Kb, because they wanted a 64Kb average but wanted the link to act like 128Kb when it came to scheduling the packet. I say "packet" and not "packets" because 64Kb is such a slow rate that, given the size of the packets, it worked out to 20ms to transfer one packet at 64Kb, or 10ms at 128Kb. So they set the curve's d (duration) to 10ms. The final result was 128Kb 10 64Kb as m1/d/m2.
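If I have read that example right, it would look something like this in ALTQ notation (qVoice is a made-up name; a per-packet delay guarantee like this is what the realtime curve is for):

# realtime(m1 d m2): a packet of that size (1280 bits) takes 20ms
# at 64Kb but only 10ms at 128Kb, hence d = 10
queue qVoice bandwidth 64Kb hfsc( realtime(128Kb 10 64Kb) )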
The way I interpreted that example: if you have latency-sensitive traffic whose arrival pattern is bursty relative to the provisioned sustained bandwidth, then you can set m1 and d such that, as long as the queue is still within its average bandwidth, it gets a "burst" (term used very loosely) that lets packets be scheduled sooner than they otherwise would be at their sustained rate. Of course, to reduce the delay of one queue you have to increase the delay of the other queues, not that I care for my "normal" traffic. The overall average bandwidth is still maintained, and the two queues will still average a 50/50 split. This also implies that the "burst" is a debt to be repaid by consuming less bandwidth after the burst.
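To put numbers on that, a two-piece service curve is just (this is the standard HFSC definition):

S(t) = m1 × t                   for t <= d
S(t) = m1 × d + m2 × (t - d)    for t > d

With the 128Kb/10ms/64Kb example, S(10ms) = 128000 × 0.010 = 1280 bits, i.e. exactly one of those packets is guaranteed within 10ms, where a flat 64Kb curve would only guarantee it within 20ms. Past d the curve accrues at the sustained rate, which is where the "debt" gets repaid.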