A definitive, example-driven, HFSC Reference Thread
-
Don't put the bandwidth value on LAN and WAN. Put the value on qInternet. Leave the WAN and LAN blank and you should not get the error.
You want qDNS to be under qInternet. qLink and qInternet should be on the same level.
Yep, exactly. The idea is not to limit the interface itself but the queues, since you might have local devices lying on your WAN subnet
-
I went back to my snapshot and tried again with the bandwidth value removed from WAN and LAN and added to both qInternet queues. Same result with a slightly different error:
(There were errors loading the rule(s): pfctl: linkshare sc exceeds parent sc root_em0 - The line in question reads (0): )
-
Are you staggering the queues though?
It should be like this:
LAN
qLink - 20%
qInternet - 95% of your real download value
qDNS - x%
qBulk - y%
x + y have to be less than the value of qInternet
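In raw pf.conf ALTQ terms (which pfSense generates from the GUI), the staggered layout above would look roughly like this; the interface macro and all numbers are made up for illustration:

```
# Hypothetical: 100Mb LAN interface, ~10Mb real download speed.
# No hard cap on the interface root; qInternet carries the real limit.
altq on $lan_if hfsc bandwidth 100Mb queue { qLink, qInternet }
queue qLink bandwidth 20% hfsc(default)
queue qInternet bandwidth 9.5Mb hfsc(linkshare 9.5Mb, upperlimit 9.5Mb) { qDNS, qBulk }
# Children sum to 9Mb, which stays below qInternet's 9.5Mb
queue qDNS bandwidth 1Mb hfsc(linkshare 1Mb)
queue qBulk bandwidth 8Mb hfsc(linkshare 8Mb)
```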
-
I know what's going on here. You should not type "95%" but the real amount, in Mbps or Kbps, that represents 95% of your real speed.
-
Where, in the Bandwidth combobox or the service curve variable boxes? Or both?
Edit: Never mind. It seems to not like using % at all. I had to convert everything to Kb and Mb instead of %, and then finally my queues appeared in Status.
It should be like this:
LAN
qLink - 20%
qInternet - 95% of your real download value
qDNS - x%
qBulk - y%
I had the proper parent/child hierarchy but that wasn't my problem.
In your example, what is the relationship between qLink and qInternet as far as bandwidth is concerned? They are both at the same level, but qLink has 20% and qInternet has 95% for a total of 115%. I don't understand how that works.
-
Since there is no value on LAN, the value on qLink does not matter, as it is the default queue and for local traffic. The only thing that matters is the value on qInternet. That defines how much the child queues have.
It's kind of weird, I know, but it does work.
-
Since there is no value on LAN, the value on qLink does not matter, as it is the default queue and for local traffic.
Can you leave it blank or must you provide some value, which is then subsequently ignored?
Set qDNS as having 30% realtime and 10% linkshare (and bandwidth).
Another question, sorry. What is the bandwidth relationship above, and how does it figure into the parent/child calculations? If I have parent qInternet at 10Mb and children qDNS at 1 Mb and qBulk at 9Mb, do I set qDNS's RT to 300Kb and LS to 100Kb as per 30%/10% above? Is it supposed to add up to 1 Mb, the bandwidth total for the parent qInternet?
-
It needs a value. 20% is fine for a random value. qDNS can be set at whatever you want. I just used 30% as an example.
-
But what's qDNS's relationship between RT, LS and bandwidth? LS and bandwidth are the same variable, so for my example, where qDNS bandwidth is 1Mb, I would set LS to 1 Mb and RT to 3 Mb (30%)? If so, that extra 2Mb above what the bandwidth setting is, where does that come from?
-
I wouldn't set the RT to anything except your priority queues. RT says the queue gets X amount of bandwidth ALL the time, so if you give qDNS 3Mbit then it gets 3Mbit all the time. So that extra 2Mbit comes from whatever qInternet is set for, in your case 9Mbit.
-
I would only ever use RT for VoIP traffic and ACKs, typically. OK, so it draws from the parent queue. Sorry for asking endless questions, but it's the only way to wrap my head around the whole thing and connect the dots between all the elements so that it makes sense. I don't like following instructions without knowing what I'm doing and why.
-
No worries. That is the best way to learn.
-
Thanks sideout, georgeman.
A couple things:
georgeman:
I know you specify this in the text, but in the floating match rules we still do not apply "quick" right? This means that the qBulk rule has to come before the more specific qDNS rule. One might be misled by the order in the text.
sideout/all:
The realtime queue you're discussing only applies when there's contention/congestion right? If I specify 30% realtime for qDNS and there is no DNS traffic being placed in the queue, the other queues are not absentmindedly robbed of the 30% bandwidth, if I understand things.
This has all been a really big help. I appreciate it. I am currently implementing the solution provided by georgeman on my bench. More later.
-
From my understanding, RT means that X% is taken automatically for that queue even if there is no traffic in the queue. At least that is the way it reads to me. For what I use it for, qGaming, that is a non-issue, as my use of HFSC is traffic shaping at LAN parties, so qGaming is my highest-priority queue. I use the following queues:
LAN
qInternet - bandwidth - 50MBit - I have 4 cable modems but putting in 200Mbit here would not be the correct thing to do as I cannot bond them to get 200Mbit
qGaming - gaming traffic - RT 30% / bandwidth - 40% / LS - 40%
qHTTPSTeam - bandwidth - 20% / LS 20%
qWEBTraffic - bandwidth - 20% / LS 20%
qACK - bandwidth - 20% / LS 20%
qLink - bandwidth - 20% / LS - 20% - default queue
WAN - 5Mbit - I have 4 WANs, so each is 5Mbit upload
qLink - bandwidth - 10% / LS - 10% - Default queue
qGaming - bandwidth 40% / RT - 20% / LS 40%
qHTTPSteam - bandwidth - 20% / LS 20%
qWEBTraffic - bandwidth - 20% / LS 20%
qACK - bandwidth - 10% / LS 10%
I use the floating rules to put DNS in qGaming so it gets good response. General web traffic goes into qWEBTraffic, and then I use rules to put Steam traffic into qHTTPSteam.
I use interface rules to direct gaming traffic out different WANs via gateway groups. I allocate one modem/WAN to, say, LoL gaming traffic, another to BF4. I dedicate one modem to strictly web traffic, and another modem is reserved for staff use and downloads, as I have a limiter on DHCP addresses to restrict all TCP connections to typically 25Mbit for everyone. This way I can get the best ping times and still give everyone bandwidth to do what they need without being too restrictive.
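The LAN side of that layout, sketched in pf.conf ALTQ syntax (interface macro assumed; pfSense builds the equivalent from the GUI):

```
altq on $lan_if hfsc queue { qLink, qInternet }
queue qLink bandwidth 20% hfsc(default)
# 50Mb here, not 200Mb: the 4 modems can't be bonded into one pipe
queue qInternet bandwidth 50Mb hfsc(linkshare 50Mb) { qGaming, qHTTPSteam, qWEBTraffic, qACK }
queue qGaming bandwidth 40% hfsc(realtime 30%, linkshare 40%)
queue qHTTPSteam bandwidth 20% hfsc(linkshare 20%)
queue qWEBTraffic bandwidth 20% hfsc(linkshare 20%)
queue qACK bandwidth 20% hfsc(linkshare 20%)
```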
-
I'd like to see your ruleset if it isn't too much trouble.
-
Okay. This is all coming along nicely.
I have created the following queues:
WAN
qLink default bw 20% ls 20%
qInternet bw 15Mb ul 15Mb ls 15Mb
qDNS bw 5% rt 5% ls 5%
qACK bw 10% ls 10%
qVPN bw 10% rt 5% ls 10%
qBulk bw 50% ls 50%
qOpenWireless bw 2Mb ul 2Mb ls 2Mb
LAN
qLink default bw 20% ls 20%
qInternet bw 50Mb ul 50Mb ls 50Mb
qDNS bw 5% rt 5% ls 5%
qACK bw 10% ls 10%
qVPN bw 10% rt 5% ls 10%
qBulk bw 50% ls 50%
OPENWIRELESS
qLink default bw 20% ls 20%
qInternet bw 10% ul 10Mb ls 10Mb
qOpenWireless bw 50% ls 50%
The screen shot details the floating rules. All are !quick on WAN, direction OUT.
The only thing that's not going to the right queue are connections from OPENWIRELESS into the qOpenWireless on WAN.
I have reset states.
I have told my pass any any rule on OPENWIRELESS to queue to qOpenWireless. That gets downloads into the right queue on the OPENWIRELESS interface but uploads are still going to qBulk.
I am assuming this is because the floating rule on WAN OUT is happening post-NAT, so the match on the source address is failing. I just fixed this: instead of having my pass any any rule on OPENWIRELESS set the queue to qOpenWireless, I mark the packet with "OW" and use that in a floating rule on WAN OUT to match on "OW" and place the traffic into qOpenWireless. Seems to work.
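In pf terms, the tag workaround looks roughly like this (interface macros and the tag name are placeholders):

```
# Pre-NAT, on the OPENWIRELESS interface: tag the traffic while the
# original source address is still visible
pass in on $ow_if from $ow_if:network to any tag OW keep state
# Post-NAT, floating on WAN out: the source is rewritten, so match the tag
match out on $wan_if tagged OW queue qOpenWireless
```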
![Screen Shot 2014-07-27 at 11.15.48 AM.png](/public/imported_attachments/1/Screen Shot 2014-07-27 at 11.15.48 AM.png)
-
@KOM:
I'd like to see your ruleset if it isn't too much trouble.
I will post up my latest rule set on Monday when I pull it off my lab firewall.
-
Here are my FW rules and aliases. You need the aliases as well to use my rule set.
https://www.dropbox.com/s/y7dtifw12y6ghmx/fwrulesalias.zip
-
@KOM:
In your example, what is the relationship between qLink and qInternet as far as bandwidth is concerned? They are both at the same level, but qLink has 20% and qInternet has 95% for a total of 115%. I don't know how that is.
No, again, do not set "95%" for the value, but the real download/upload speed multiplied by approximately 0.95.
Consider that the interface speed is usually either 100 Mbps or 1000 Mbps, while the real download speed is, let's say, 5 Mbps. In this case, you would set the m2 and bandwidth values of the qInternet queue at 4.75 Mbps.
Then, if you set qLink at 20% that would be 100 Mbps x 0.2 = 20 Mbps.
So the sum of the bandwidth assigned to both root queues is actually 24.75 Mbps (< 100 Mbps).
We say that the 20% value for the qLink queue doesn't matter because, in this example, it is destined just for local traffic, there is no upperlimit on it, and usually the qInternet values are way lower than the interface speed. If this is not the case, or if you want to be strictly accurate, you would need to set the qLink value to the difference between the interface speed and the qInternet value, so the sum of them adds up to 100% (or the interface speed).
Anyway, linkshare values are not absolute values, but relative to each other. It doesn't matter if they don't add up to 100%; what matters is the proportions between them. If you set ls to 20 on one queue and to 1 on another, that means that ls will try to give the first queue 20 times more bandwidth than the second one (when the link is saturated).
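To illustrate the relative nature of linkshare (hypothetical queue names, under a parent of at least a few Mb):

```
# Under saturation these aim for a 20:1 split between the two queues,
# even though the values are nowhere near 100% of the parent
queue qHeavy bandwidth 2Mb hfsc(linkshare 2Mb)
queue qLight bandwidth 100Kb hfsc(linkshare 100Kb)
```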
georgeman:
I know you specify this in the text, but in the floating match rules we still do not apply "quick" right? This means that the qBulk rule has to come before the more specific qDNS rule. One might be misled by the order in the text.
Exactly. Traffic will first match the qBulk, then the qDNS too, and will stick to the last one that matched
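As a pf sketch (interface macro assumed): because these are non-quick match rules, evaluation falls through and the last match wins, so the catch-all must come first:

```
match out on $wan_if queue qBulk                          # general catch-all first
match out on $wan_if proto udp to any port 53 queue qDNS  # specific rule last: it wins
```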
sideout/all:
The realtime queue you're discussing only applies when there's contention/congestion right? If I specify 30% realtime for qDNS and there is no DNS traffic being placed in the queue, the other queues are not absentmindedly robbed of the 30% bandwidth, if I understand things.
Yep, as you say. By specifying realtime on one queue, if that bandwidth isn't being used it is still available to other queues.
-
There are tons of factors and caveats to consider for an accurate implementation. For example, RT considers the service curve since traffic started, so it could be "penalized" on high-usage times if it was exceeded during non-peak times and we are not using linkshare (not what we want!)
This is why some people (I tend to agree with this) suggest using RT only to fulfill latency requirements and not bandwidth requirements (which can be handled by linkshare). And when you use RT, set the service curve on LS to the same values as RT (to account for the above scenario).
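For example, a sketch of that advice for a hypothetical qVoIP queue:

```
# realtime guarantees the service curve unconditionally; mirroring it on
# linkshare keeps the queue from being penalized at peak times for
# bandwidth that RT already delivered off-peak
queue qVoIP bandwidth 1Mb hfsc(realtime 1Mb, linkshare 1Mb)
```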
-
Ok. Moving on to the OpenVPN prioritization.
My site-to-site OpenVPN to the office is on a server aliased to work_vpn, UDP 1195.
I have qVPN on WAN and LAN set at bw 10% rt 5% ls 10%
Floating rule: WAN out dest work_vpn UDP 1195 none/qVPN
That places traffic sent to the VPN in qVPN but none of the return traffic is going into qVPN on LAN.
I haven't been able to get traffic received through the VPN into qVPN on LAN.
I have tried
Floating: LAN out source remote_vpn_lan any none/qVPN
Floating: WAN in source work_vpn UDP 1195 none/qVPN
I know that I can't apply queues to virtual interfaces (OpenVPN), only physical. Not sure what I need to do here.
Edited:
I think I solved this with the following rules:
Floating Match LAN in any source any dest remote_vpn_lan none/qVPN
Floating Match WAN out UDP source any dest work_vpn 1195 none/qVPN
It looks like one of the necessary concepts to grasp is that your rules have to be implemented so they catch the traffic at the point of state creation.
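Translated to pf syntax (aliases shown as tables, names as in the post), the working pair queues the traffic where its state is created:

```
# LAN in: catches VPN-delivered traffic as the state is created on LAN
match in on $lan_if from any to <remote_vpn_lan> queue qVPN
# WAN out: catches the encrypted tunnel traffic leaving the WAN
match out on $wan_if proto udp from any to <work_vpn> port 1195 queue qVPN
```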
It looks like this also works:
Floating Match OpenVPN any any source any dest any none/qVPN
Floating Match WAN out UDP source any dest work_vpn 1195 none/qVPN
I would think that the former could be used to queue a specific VPN out the LAN interface, and the latter would be an easy way to do the same with all OpenVPN traffic.