A definitive, example-driven HFSC reference thread
-
These are the rules I ended up with. The two DNS rules are Match rules on WAN, any direction, not quick.
![Screen Shot 2014-07-21 at 11.17.19 PM.png](/public/imported_attachments/1/Screen Shot 2014-07-21 at 11.17.19 PM.png)
![Screen Shot 2014-07-21 at 11.30.55 PM.png](/public/imported_attachments/1/Screen Shot 2014-07-21 at 11.30.55 PM.png)
-
I then started a large download and opened a terminal window. This is where things don't seem quite right. In the terminal window I repeatedly entered 'dig @8.8.8.8 www.google.com'. Query time really suffered during the download, like 700-1000ms, up from about 75ms with no other activity. Rather than paste a bunch of screen shots, I am pasting the pertinent section of /tmp/rules.debug and a pfTop capture.
I find it interesting that all the downloads are on qLink instead of qInternet. Downloads don't appear to be subject to shaping at all until there's contention for the gigabit LAN. Am I missing something?
![Screen Shot 2014-07-21 at 11.16.55 PM.png](/public/imported_attachments/1/Screen Shot 2014-07-21 at 11.16.55 PM.png)
![Screen Shot 2014-07-21 at 11.35.05 PM.png](/public/imported_attachments/1/Screen Shot 2014-07-21 at 11.35.05 PM.png)
-
The problem with your approach is assuming that there are a lot of people here who know HFSC inside & out, and are willing to help. My experience here is that's just not the case. I have only seen a small handful of people who really seem to know HFSC, and I'm sure they're not interested in providing free consulting. I spent a lot of time trying to fully understand HFSC and read a lot about it online from every source I could find. In the end, I switched to PRIQ. So much easier to understand & manage.
-
That is what this thread is intended to remedy. I don't think it is a matter of people not wanting to provide free consulting, I think it's simply a time issue. I am not asking for someone to say "Here is the config for your network," but I am hoping that either I can figure this out piece by piece, or someone might chime in with a piece of the puzzle occasionally. I know I could just punt and use PRIQ, but I don't want to. :)
I haven't quite nailed it down yet, but it seems having the qLink queues tagged as "default" is a problem.
I created qDefault queues under qInternet and changed them to default (removing default from qLink) and I started getting more desired behavior.
I then created floating match rules for the LAN<->DMZ traffic, sending it to qLink and my testing wasn't positive. Seemed like it was only going into qLink in one direction or something. I haven't been back to it since. Maybe tonight.
The wizard output for WAN seems to be pretty close. It's the qLink/qInternet queues on LAN ports that appear to need some work.
-
IMO, HFSC is only really useful in the case where you have a heavily saturated link with time-sensitive devices on the network, but you want to maintain minimum service levels for the lower queues. With PRIQ, your higher queues will always be serviced first to the possible severe detriment of the lower queues. Everything else can be done via PRIQ or limiters. However, I'm happy to help however I can. I've forgotten a lot of what I thought I knew, but isn't qLink the Linkshare queue? When all RT guarantees are met, all available bandwidth is allocated to LS aka qLink, with a max of 100% and a minimum guarantee of whatever your bandwidth setting for qLink is, IIRC.
-
Alright!
First and most important of all (I think I mention this in at least 70% of my topics here): shaping for multi-LAN does NOT work as one would expect, even if the wizard pretends to configure it. Download is shaped on the LAN side, but you cannot have a queue applying to more than one interface at a time, so you cannot "share" download bandwidth between download interfaces. You could, however, set a hard limit of, say, 500 Kbps on each of your LAN queues, so that their sum never exceeds what your ISP provides. Obviously, this is plain awful and far from optimal.
There are currently 3 ways around this (that I can think of):
- You can bridge the 3 interfaces and apply the shaper to the bridge (you can still control inter-LAN traffic fairly well with rules)
- You can use VLANs on the same interface + an L2 switch, and apply the shaper to the physical interface
- You can use another pfSense box in front of the other one, just for shaping purposes.
-
Now, as regards HFSC, I suggest avoiding the wizard and configuring it yourself from scratch; it is not that difficult. Here are some tips (although most of these apply to all schedulers):
- Remember this is a stateful firewall and the shaper is tied to it. When traffic goes through the firewall, you only need to queue it once (either incoming on one interface or outgoing on the other, and in just one direction). On the other interface, it will be placed in the queue with the same name (which is why queues on LAN and WAN have the same names). You are in fact shaping connections, not packets.
- The whole traffic shaping thing is based on the assumption that you know the download and upload limits of the connection. The idea is that queuing occurs at your router (where you can control it) and not at your ISP's router. This is why it is so important to properly set the upper limits on the parent queues (and this is also why the multi-LAN I mentioned before does not work)
- I prefer to queue outgoing connections (LAN to internet) with floating rules, direction OUT, on the WAN interface. Always specify the interface and direction on floating rules, otherwise it will quickly become a nightmare to debug (which rule is really applying??)
- I prefer to queue incoming connections (NAT port forwards and alike) on the regular interface tab (you need an allow rule there, anyway).
- Set the qLink queue as default queue on every interface. Then make proper matching rules to direct the traffic wherever you want.
- Remember that floating rules, action match, are not "quick" rules. This means that all rules will apply in order, and the last one matching wins. This means that the logic is sort of the inverse of the regular interface rules. Here the most general rule needs to go first, and more specific ones below it. Keep this in mind!!
- The only protocol where the "ack queue" concept makes sense is TCP. For whatever else, keep the selection empty.
- The "priority" option does not do anything on HFSC (check the source code!). In fact, I'm not sure why the box is still there…
- For simplicity's sake, on HFSC always set the same value for both linkshare m2 and bandwidth. They are actually the same thing, but if the values differ you will not know which one is applying…
- For now, let's forget about the m1 and d parameters, so everything I say applies to m2.
--
Practice time! Delete all queues and related rules and start from scratch (also, for the reasons stated before, all examples involve just one LAN).
Let's start by prioritizing DNS traffic. Create this hierarchy on both LAN and WAN:
*qInternet
---qDNS
---qBulk
*qLink
You will set both the upperlimit and linkshare of qInternet on LAN to around 95% of your real download speed, and both upperlimit and linkshare on WAN to 95% of your real upload speed. Also, on both interfaces set qLink as the default queue. Why the linkshare value? Because in this hierarchy qInternet is a parent queue, and you will in fact be queueing traffic on its children.
Set qDNS as having 30% realtime and 10% linkshare (and bandwidth).
Set qBulk as having 90% linkshare (and bandwidth).
Set qLink as having 20% linkshare (and bandwidth). In fact, it doesn't really matter what value you put here, since this queue is destined for local traffic.
What's the deal here? Realtime is guaranteed bandwidth. This means that no matter what the other queues want, this queue will get 30% of the available bandwidth for itself at any given time (*), even if other queues need to drop packets as a result. The sum of all the realtime values cannot exceed 80% of your bandwidth. (Note: 30% for DNS is insanely high in a real-life scenario, but this is just for learning purposes.)
To keep this simple, always try to make the linkshare values of the children queues sum up to the value of the parent queue. This is because HFSC uses a "subtractive" method for the percentages (I can elaborate on this later).
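As a sanity check, this is roughly the shape of the ALTQ section you should later recognize in /tmp/rules.debug. The interface name em1, a 10 Mbit link (so 95% ≈ 9.5 Mb), and the exact option syntax here are my assumptions; your generated output will differ in the details:

```
# LAN side (em1 assumed), shaping download; GUI percentages shown as absolute values
altq on em1 hfsc bandwidth 100Mb queue { qLink, qInternet }
queue qLink on em1 bandwidth 2Mb hfsc ( default )
queue qInternet on em1 bandwidth 9.5Mb hfsc ( linkshare 9.5Mb upperlimit 9.5Mb ) { qDNS, qBulk }
queue qDNS on em1 bandwidth 950Kb hfsc ( realtime 3Mb linkshare 950Kb )
queue qBulk on em1 bandwidth 8.55Mb hfsc ( linkshare 8.55Mb )
```

Note how qDNS and qBulk linkshare (950Kb + 8.55Mb) sum to the qInternet parent, per the rule above.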
Now, let's go for the rules. On the floating tab, create these rules:
- Interface WAN, direction OUT, protocol any, destination ! WAN net –> queue qBulk
- Interface WAN, direction OUT, protocol UDP, destination ! WAN net, port 53 –> queue qDNS
(Remember the ordering tip from above: these are match rules, so the last matching rule wins. The general qBulk rule goes first and the more specific qDNS rule below it.)
I like to include "destination ! WAN net" on these rules to keep potential traffic intended for some other host on the WAN subnet out of the internet queues and in the default qLink.
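In pf terms, those two floating rules translate to something along these lines (the macro names and exact syntax are illustrative, not a literal copy of what pfSense writes to rules.debug):

```
# General rule first: everything leaving WAN for the internet -> qBulk
match out on $wan from any to ! ($wan:network) queue qBulk
# More specific rule below it: outgoing DNS -> qDNS (last match wins)
match out on $wan proto udp from any to ! ($wan:network) port 53 queue qDNS
```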
That's it. A quick test should reveal that DNS queries are being placed on the appropriate queue.
IMPORTANT NOTE: this doesn't mean that DNS queries will be magically faster, there are several factors to consider. For example, if you have 500 torrents active at the same time, clogging up your bandwidth, probably there's no shaping on Earth that can help…
These very same principles can be applied to shaping any other kind of traffic.
Feel free to ask any questions. Hope this helps! (I should probably write a Wiki article at some point, though). Next lesson we can talk about the ACK queues ;)
Best regards
EDIT:
(*) While re-reading what I wrote, "at any given time" here sounds somewhat misleading, because what realtime actually guarantees is not the instantaneous bandwidth but the long-term average. In real life, on a properly configured hierarchy, it's unlikely that traffic gets penalized for having exceeded its bandwidth in the long term, though. Still, the best use of realtime queues is to handle latency through properly set d and m1 parameters, NOT prioritization of packets.
ANOTHER EDIT: for a deep explanation of HFSC, this is my favourite source of info: http://linux-tc-notes.sourceforge.net/tc/doc/sch_hfsc.txt
-
Thanks a lot George for your contribution. I've seen you here before and I've read all your HFSC posts, but I didn't think you were around much anymore.
Question: As I was creating the queues as per your directions, the GUI kept yelling at me 10-11 times about this same error:
(There were errors loading the rule(s): pfctl: the sum of the child bandwidth higher than parent root_em0 - The line in question reads (0): )
Did you see this? I went through and checked everything again and it looks good, but I still have the errors. Is this normal when you're building the queues manually?
When I look at Status - Queues, all I have are Root Queue and qLink. All the others aren't being shown.
-
@KOM:
Thanks a lot George for your contribution. I've seen you here before and I've read all your HFSC posts, but I didn't think you were around much anymore.
Question: As I was creating the queues as per your directions, the GUI kept yelling at me 10-11 times about this same error:
(There were errors loading the rule(s): pfctl: the sum of the child bandwidth higher than parent root_em0 - The line in question reads (0): )
Did you see this? I went through and checked everything again and it looks good, but I still have the errors. Is this normal when you're building the queues manually?
When I look at Status - Queues, all I have are Root Queue and qLink. All the others aren't being shown.
That message shows up when the sum of the bandwidth assigned to the children queues is higher than the parent queue. There must be some value not set properly. Probably the bandwidth of the interface itself? (leave it blank)
You can post the generated rules from rules.debug and we'll see…
-
In my lab, I gave it an arbitrary bandwidth of 9Mb/s up/down. Then I followed your directions exactly and pumped in all the specified percentages. Did you mean that actual percentages should be used in the GUI, or that we should enter n% of the actual bandwidth number? Maybe I'll run through it again here at home. Won't be back in the office until Monday.
Edit: So I just spun up the lab and tried it. I set WAN and LAN bandwidth to 9Mb/s. I created qLink next, set it to default, gave it bandwidth 20% and LS 20%. Then I created qInternet and gave it UL 95% LS 95%. As soon as I applied it, the same error appeared. See attached pics.
-
Don't put the bandwidth value on LAN and WAN. Put the value on qInternet. Leave WAN and LAN blank and you should not get the error.
You want qDNS to be under qInternet. qLink and qInternet should be on the same level.
-
Thank you. That wasn't clear to me. No errors when I applied the shaper.
Edit: Not so fast. I went to Status - Queues and it failed to load the queues at all, and the error appeared in the status box again.
-
I would blow it all away and redo it from scratch. I can try it in my lab later and see what I get.
-
Don't put the bandwidth value on LAN and WAN. Put the value on qInternet. Leave WAN and LAN blank and you should not get the error.
You want qDNS to be under qInternet. qLink and qInternet should be on the same level.
Yep, exactly. The idea is not to limit the interface itself but the queues, since you might have local devices lying on your WAN subnet
-
I went back to my snapshot and tried again with the bandwidth value removed from WAN and LAN and added to both qInternet queues. Same result with a slightly different error:
(There were errors loading the rule(s): pfctl: linkshare sc exceeds parent sc root_em0 - The line in question reads (0): )
-
Are you staggering the queues though?
It should be like this:
LAN
qLink - 20%
qInternet - 95% of your real download value
qDNS - x%
qBulk - y%
x + y have to be less than the value of qInternet
-
I know what's going on here. You should not type "95%" but the real amount, in Mbps or Kbps, that represents 95% of your real speed.
-
Where, in the Bandwidth combobox or the service curve variable boxes? Or both?
Edit: Never mind. It seems to not like using % at all. I had to convert everything to Kb and Mb instead of %, and then finally my queues appeared in Status.
It should be like this:
LAN
qLink - 20%
qInternet - 95% of your real download value
qDNS - x%
qBulk - y%
I had the proper parent/child hierarchy, but that wasn't my problem.
In your example, what is the relationship between qLink and qInternet as far as bandwidth is concerned? They are both at the same level, but qLink has 20% and qInternet has 95%, for a total of 115%. I don't see how that works.
-
Since there is no value on LAN, the value on qLink does not matter, as it is the default and for local traffic. The only thing that matters is the value on qInternet. That defines how much the child queues have.
It's kind of weird, I know, but it does work.
-
Since there is no value on LAN, the value on qLink does not matter, as it is the default and for local traffic.
Can you leave it blank or must you provide some value, which is then subsequently ignored?
Set qDNS as having 30% realtime and 10% linkshare (and bandwidth).
Another question, sorry. What is the bandwidth relationship above, and how does it figure into the parent/child calculations? If I have parent qInternet at 10Mb and children qDNS at 1 Mb and qBulk at 9Mb, do I set qDNS's RT to 300Kb and LS to 100Kb as per 30%/10% above? Is it supposed to add up to 1 Mb, the bandwidth total for the parent qInternet?
-
It needs a value. 20% is fine for a random value. qDNS can be set at whatever you want. I just used 30% as an example.
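To make the parent/child arithmetic concrete, here is my reading of the 10 Mb example from the question above (the numbers are illustrative):

```
qInternet (parent)        = 10 Mb
qDNS  linkshare/bandwidth = 10% of 10 Mb = 1 Mb
qBulk linkshare/bandwidth = 90% of 10 Mb = 9 Mb   (children sum to the parent: 1 + 9 = 10 Mb)
qDNS  realtime            = 30% of 10 Mb = 3 Mb   (all realtime values together must stay under 80% of 10 Mb = 8 Mb)
```

In other words, the percentages are taken of the parent qInternet, not of the child's own bandwidth; setting qDNS to RT 300Kb / LS 100Kb (percentages of qDNS's own 1 Mb) is not what was described.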