@mattund said in Limiters and floating rules:
I count it as "in the noise" per-say, and let it pass my mind. I find ICMP traverses the network leisurely anyway, and besides, I haven't found a way to NOT drop the traffic -- if ICMP is getting dropped a lot of other problems can seep in...
Great, thanks. I made the change and confirm that I can ping my cable modem connected to WAN even under full bandwidth load.
I have had a similar thing happen to me. I had an existing setup with limiters already applied to rules, and trying to change from either the QFQ or FQ_CODEL scheduler to PRIO caused a kernel panic. I was only able to isolate it to something in the traffic coming in off the LAN, as disconnecting the LAN stopped the system from panicking on reboot and allowed me to restore the config.
No idea what traffic might have been causing the issue. Suffice it to say I will be keeping clear of the PRIO scheduler.
Tried applying it to LAN, with the same result.
Tried lowering the quantum to 300 (not sure which value I should use, but found another topic where someone used 300).
I have to read more about the subject before I can answer the rest. I just tried it out for fun to see what it could do. My connection doesn't suffer that much from bufferbloat, but I have a few friends whose connections do, and I wanted to see if this could help them out at some point.
I have noticed this as well. I have also had issues with my firewall not loading at all under certain circumstances with this. It seems that when the IPv6 gateway is unavailable, the firewall will freak out if you have specified a gateway on your IPv6-specific rules.
You can use aliases to act as IP groups. Create a PRIQ-based shaper, which is one of the simplest. Create your 3 levels of queues with different priorities, and then use floating firewall rules to direct traffic from an alias into the proper queue based on protocol or IP group.
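For reference, something like the following pf.conf fragment is roughly what that GUI setup generates under the hood -- a sketch only, with interface, queue names, and addresses as assumptions:

```
# Hypothetical sketch: PRIQ shaper with three priority levels on WAN (em0).
altq on em0 priq bandwidth 100Mb queue { qHigh, qMedium, qLow }
queue qHigh priority 7
queue qMedium priority 4 priq(default)
queue qLow priority 1

# Floating-rule equivalents: an alias becomes a table, and rules steer
# traffic into a queue by IP group or protocol.
table <voip_hosts> { 192.168.1.10, 192.168.1.11 }
pass out on em0 proto udp from <voip_hosts> queue qHigh
pass out on em0 proto tcp to port 80 queue qLow
```

In the pfSense GUI you'd express the same thing with the Traffic Shaper wizard plus floating rules, rather than editing pf.conf directly.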
@vanapagan said in Limiting guest vlan to specific mbits up/down:
We have a specific vlan we allow wireless guests to log in with. It goes directly to the internet and cannot reach anything internal. How do we limit that vlan to take no more than 10% of the upstream or downstream traffic? All other vlans will share the other 90% equally.
So if I have a 100mbit up/down I want the wireless guests to consume no more than 10% (10mbits) and leave the other 90mbits to the other vlans.
I've been playing with shaping and limiting but have not had any good results. I am at the point of not seeing the forest for the trees.
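For what it's worth, pfSense limiters are dummynet pipes underneath, and the question above maps onto two different approaches -- a hard cap, or weighted sharing. A rough sketch of the underlying ipfw/dummynet config (all numbers are examples, not a tested recipe):

```sh
# Option A -- hard cap: give the guest VLAN its own 10 Mbit/s pipe each way.
ipfw pipe 1 config bw 10Mbit/s    # guest download
ipfw pipe 2 config bw 10Mbit/s    # guest upload

# Option B -- 90/10 split only under contention (guests can borrow
# unused bandwidth): one shared pipe with weighted queues.
ipfw pipe 3 config bw 100Mbit/s
ipfw queue 1 config pipe 3 weight 90   # all other VLANs
ipfw queue 2 config pipe 3 weight 10   # guest VLAN
```

In the GUI, Option A is a plain limiter assigned to the guest VLAN's rules; Option B is one limiter with two weighted child queues.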
I have a setup with multiple VLANs, including a "guest" one, and I've now set up limiters with weights, which allows guests to use all bandwidth if it's available... effectively implementing borrowing. So far Quick Fair Queueing works with CoDel queues, but if I use the PRIO scheduler I get kernel panics :)
I do have to add the In/Out pipe manually to each VLAN rule that allows VLAN traffic out to the world (to be NAT'd).
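The weighted-with-borrowing setup described above corresponds roughly to this dummynet sketch (scheduler type and numbers are assumptions for illustration):

```sh
# Hypothetical sketch: one limiter (pipe) with a flow-aware scheduler
# and weighted per-VLAN queues. Weights only bite under contention,
# so an idle link lets any queue borrow the full bandwidth.
ipfw pipe 1 config bw 500Mbit/s
ipfw sched 1 config pipe 1 type qfq   # or fq_codel, etc.
ipfw queue 1 config sched 1 weight 80 # trusted VLANs
ipfw queue 2 config sched 1 weight 20 # guest VLAN
```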
@valeriy said in Limiter queue weights:
In theory this is how it should work, but in practice I have seen some weird numbers across two queues.
My guess is you should simulate the load, observe, and correct the values. In fact, I have a suspicion that it does not work as it's supposed to, and maybe it is broken? I had some strange outcomes after setting up my limiters similarly to your setup:
I have 5-10 hosts in one queue and 1 host in another queue; in order to achieve a 50/50 bandwidth split I had to adjust the weight to 20 on the first queue and 80 on the second queue (the one with a single host). My guess is that while using dynamic queuing, the share is applied per host rather than per queue.
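That per-host behavior would be consistent with how dummynet masks work: a mask turns one configured queue into a dynamic queue per matching host, and the weight applies to each dynamic queue. A sketch of the effect (addresses and weights are illustrative):

```sh
# Hypothetical sketch: with a /32 source mask, each host gets its own
# dynamic queue carrying the configured weight.
ipfw pipe 1 config bw 100Mbit/s
ipfw queue 1 config pipe 1 weight 10 mask src-ip 0xffffffff  # ~5 hosts
ipfw queue 2 config pipe 1 weight 10 mask src-ip 0xffffffff  # 1 host

# At saturation with 5 active hosts vs 1, the effective split is
# 5*10 : 1*10, i.e. roughly 83%/17% -- not 50/50. Bumping the weights
# to 20 and 80 gives 5*20 : 1*80 = 100:80, much closer to even, which
# matches the adjustment described above.
```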
This is also a question I have. Basically, I have two populations that I'd like to manage.
Main users should have 90% of the bandwidth at saturation.
Guest users should have 10% at saturation.
I can see that this should work if I simply have two queues with no masks; otherwise things may get tricky, since the number of clients in each pool will affect the overall competition for bandwidth. Is there a way to have two populations share a global limiter while client competition stays within each population?
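One way to sketch the "two queues, no masks" version with dummynet (numbers are examples): the queue-level weights fix the 90/10 split between populations no matter how many clients each pool contains.

```sh
# Hypothetical sketch: one shared pipe, two weighted queues, no masks,
# so the 90/10 split holds regardless of client counts in each pool.
ipfw pipe 1 config bw 100Mbit/s
ipfw queue 1 config pipe 1 weight 90   # main users
ipfw queue 2 config pipe 1 weight 10   # guest users
# Fairness *between clients inside* each pool would then come from a
# flow-aware scheduler on the pipe, not from per-host masks on the queues.
```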
You're using HFSC. You can't do both bandwidth shaping and priority shaping; they're fundamentally pretty much exclusive. What you can do is set the bandwidth on each queue, and HFSC will make sure each queue gets the correct amount of bandwidth.
For example, at one point I had a 64Kbit/s queue for ICMP traffic with HFSC. Even when P2P traffic was using 99Mbit/s of the 100Mbit connection, I could get a ping that acted as if the connection was idle, because HFSC would always make sure ICMP got its 64Kbit/s.
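An HFSC setup like the one described looks roughly like this in pf.conf terms (queue names and the non-ICMP bandwidth split are assumptions; the GUI shaper generates the equivalent):

```
# Hypothetical sketch: HFSC guarantees qICMP its 64Kb even at full load.
altq on em0 hfsc bandwidth 100Mb queue { qICMP, qP2P, qDefault }
queue qICMP bandwidth 64Kb hfsc(realtime 64Kb)
queue qP2P bandwidth 50Mb
queue qDefault bandwidth 49Mb hfsc(default)

pass out on em0 inet proto icmp queue qICMP
```

The `realtime` service curve is what makes the ICMP guarantee hold regardless of what the other queues are doing.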
@harvy66 Thanks. Are you suggesting that I adjust the "mask" within the limiter?
Mask: Destination address
IPv4 mask bits: 32 (?)
If I did that, then it sounds to me like there would be a limiter pipe per outbound destination, meaning that each outbound destination gets the 10Mbit cap (?). I'll review the online pfSense book and read up more on it.
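That reading matches how dummynet masks behave, as a sketch (illustrative only):

```sh
# Hypothetical sketch: a destination mask of /32 (0xffffffff) makes the
# limiter spawn a dynamic pipe per destination IP, so EACH destination
# gets the full 10 Mbit/s cap rather than all of them sharing it.
ipfw pipe 1 config bw 10Mbit/s mask dst-ip 0xffffffff

# Without the mask, one static pipe is shared by all traffic it matches:
# ipfw pipe 1 config bw 10Mbit/s
```

So "Mask: Destination address, 32 bits" is the right choice when you want a per-destination cap, and "none" when you want a single shared cap.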