Playing with fq_codel in 2.4
-
Ok I see.
My point was that you made it sound like only OPNsense offered this feature, which is incorrect.
Ah, ok, that wasn't my point. I just wanted to share that both FQ-Codel and HFSC/Codel work well when configured right, and my finding from quite a bit of testing was that FQ-Codel was more efficient, but not by much; I had working results with both.
-
My bad, my bad! It was a really late couple of nights haha.
-
Hi guys,
Have been following the discussion on how to set up weights on the queues. Wanted to go through an example to make sure I understand correctly:
Let's assume I have 3 subnets (LAN1 - 3) and one guest network. I'd like to make sure that when under load, no LAN (or guest network) can hog all the bandwidth.
To set this up with limiters, I would:
Create an upload and a download limiter, and then create the following under each:
Download: Create 4 queues (one for each subnet with weight 30, and one for the guest network with weight 10)
Upload: Create 4 queues (one for each subnet with weight 30, and one for the guest network with weight 10).
Assuming I had a 100/100 connection, this would ensure that:
With no load, any of the subnets, including the guest network, could consume up to 100Mbit.
Assuming the connection is maxed out, this will ensure that LAN1-3 are limited to 30Mbit each, and the guest network is limited to 10Mbit.
In the situation where e.g. only LAN1 and LAN2 are trying to use all the bandwidth (i.e. no traffic on LAN3 and the guest network), how would it work? Depending on which subnet started using the bandwidth first, is either able to go up to 70Mbit, as the other is guaranteed at least 30Mbit?
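To make it concrete for myself, here is roughly what I'd expect the resulting dummynet config to look like (a sketch only; the pipe and queue numbers are illustrative, not taken from anyone's actual setup):
pipe 1 config bw 100Mb                # download limiter
sched 1 config pipe 1 type fq_codel
queue 1 config pipe 1 weight 30       # LAN1
queue 2 config pipe 1 weight 30       # LAN2
queue 3 config pipe 1 weight 30       # LAN3
queue 4 config pipe 1 weight 10       # guest
pipe 2 config bw 100Mb                # upload limiter
sched 2 config pipe 2 type fq_codel
queue 5 config pipe 2 weight 30       # LAN1
queue 6 config pipe 2 weight 30       # LAN2
queue 7 config pipe 2 weight 30       # LAN3
queue 8 config pipe 2 weight 10       # guest
Each interface's firewall rule would then point its In/Out pipe at that subnet's queue pair.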
Thanks in advance for your help and explanation; I really appreciate it.
-
You got it: in the LAN 1 and 2 only scenario, it would go to 50 Mbps each, since they are weighted equally.
The speeds will certainly have transient periods of asymmetric throughput but will balance out.
-
You got it: in the LAN 1 and 2 only scenario, it would go to 50 Mbps each, since they are weighted equally.
The speeds will certainly have transient periods of asymmetric throughput but will balance out.
Thanks! I configured everything as described and was able to test it by running a speed test on the three LANs concurrently. It was nice to see speeds adjusting so that every LAN got its fair share as determined by the weights, yet if the other two LANs are idle, the third LAN can still use all the bandwidth.
Thanks again for the help. I think it's great how, with proper traffic shaping, one can really get the most out of a lower-bandwidth connection; e.g. 50/50 or 75/75 will go a long way with proper shaping vs. spending extra $ to upgrade to more bandwidth to try to solve the problem.
-
The problem I'm having with fq_codel is shaping OpenVPN. It's not clear how best to apply fq_codel to OpenVPN for my setup.
There are two options here:
1. Apply fq_codel to the WAN firewall rule for OpenVPN. This works well for site-to-site VPNs. If I send highly-compressible data, then the LZ4 compression works and I get higher throughput. Incompressible data is shaped normally and works well. This doesn't work well for a road-warrior connection, though: when the road warrior accesses the Internet, that traffic is not handled by fq_codel, and if it saturates the link then it's like not having fq_codel at all.
2. Apply fq_codel to the OpenVPN interface firewall rules. This apparently breaks compression, as I couldn't get rates that exceeded the limiter speed.
With the old codelq applied to WAN, it didn't seem to matter what I did; it always did a pretty good job of keeping latency under control with/without OpenVPN, highly-compressible data, etc. fq_codel does a better job, but having to apply it to every firewall rule is a bit of a configuration tangle.
*Applying fq_codel to the WAN firewall rule for OpenVPN and sending highly-compressible data does introduce a lot of latency for me, but it's still OK. It's much worse without fq_codel.
For reference:
Idle: 8ms; regular upstream saturation with fq_codel: 12-18ms; highly-compressible upstream saturation: 100ms; no fq_codel/codel upstream saturation: 1500ms.
-
I have two queues created under the "download" limiter and they show up in Limiter Info, but when I create the schedule only one queue gets added…
Does the command "ipfw sched 1 config pipe 1 type fq_codel" need to be modified to tell it to include all queues? I'm trying to add a lower weight to the guest network.
Edit: One other observation: I followed the screenshots from post 121, but I needed to set the mask to match my subnets, or multiple clients were clashing and still causing bufferbloat.
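In ipfw terms, the mask change amounts to something like this on each queue (a sketch only; the queue numbers and weights are illustrative, and 0xffffff00 is a /24 mask, so each subnet gets its own dynamic queue instead of all clients clashing in one):
ipfw queue 1 config pipe 1 weight 30 mask dst-ip 0xffffff00
ipfw queue 2 config pipe 1 weight 10 mask dst-ip 0xffffff00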
![Screen Shot 2017-10-28 at 10.40.24 PM.png](/public/imported_attachments/1/Screen Shot 2017-10-28 at 10.40.24 PM.png)
![Screen Shot 2017-10-28 at 10.40.08 PM.png](/public/imported_attachments/1/Screen Shot 2017-10-28 at 10.40.08 PM.png)
-
I really don't get much difference. I was using OPNsense and fq_codel before, as it seemed to just work better for me.
With the new release, I changed back and just use HFSC queues with codel checked and some very basic rules to make sure my gaming traffic is first and my non-important traffic (media downloads and other odd Plex-related download stuff) is limited. Works like a champ.
The only thing for me always comes back to making sure my upload and download limits closely match what I actually expect out of my link, so I use 940 down and 880 up on Verizon's Gigabit FiOS, with a queue of 1000. No drops and no bufferbloat that I've been able to make happen.
I have been using ALTQ FAIRQ + Codel Active Queue Management on my 150/150 link along with the queue set to 1024 in the child queue. My question is, does it make more sense to set the queue in the Codel child or the FAIRQ parent? Will I see a performance difference?
-
I have been using ALTQ FAIRQ + Codel Active Queue Management on my 150/150 link along with the queue set to 1024 in the child queue. My question is, does it make more sense to set the queue in the Codel child or the FAIRQ parent? Will I see a performance difference?
I've stuck with HFSC and codel on the child queues with a queue limit of 1000. Works perfectly for me. I use a very simplistic setup, as I only have high, default, and low queues; gaming/VoIP is high and my download/sync traffic is low. Everything is just defaults.
-
Hello,
I am trying to configure the weights on the queues with fq_codel.
I have a LAN, a WAN, and a VPN (IPsec). I want to put a higher weight on the VPN (weight 90) and a lower weight on my WAN (weight 10).
I use floating rules on the LAN interface to match my flows.
My limiters:
- Download: 1800kbit/s
- Download_LOW: weight 10 (mask /24 with "destination addresses")
- Download_HIGH: weight 90 (mask /24 with "destination addresses")
- Upload: 1800kbit/s
- Upload_LOW: weight 10 (mask /24 with "source addresses")
- Upload_HIGH: weight 90 (mask /24 with "source addresses")
Here is my /root/rules.limiter file:
pipe 1 config bw 1800Kb
sched 1 config pipe 1 type fq_codel
queue 1 config pipe 1 weight 10 mask dst-ip6 /128 dst-ip 0xffffff00
queue 2 config pipe 1 weight 90 mask dst-ip6 /128 dst-ip 0xffffff00
pipe 2 config bw 1800Kb
sched 2 config pipe 2 type fq_codel
queue 3 config pipe 2 weight 10 mask src-ip6 /128 src-ip 0xffffff00
queue 4 config pipe 2 weight 90 mask src-ip6 /128 src-ip 0xffffff00
It works without fq_codel, but with fq_codel the limiters work while the weights on the queues do not; my queues are not used.
When I do an "ipfw sched show", I do not understand why my queue 2 shows up under the second limiter.
# ipfw sched show
00001: 1.800 Mbit/s 0 ms burst 0
q65537 50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
 sched 1 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 7ms interval 100ms quantum 2000 limit 10240 flows 2048 ECN
   Children flowsets: 2 1
00002: 1.800 Mbit/s 0 ms burst 0
q00002 50 sl. 0 flows (256 buckets) sched 1 weight 90 lmax 0 pri 0 droptail
 mask: 0x00 0x00000000/0x0000 -> 0xffffff00/0x0000
 sched 2 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 7ms interval 100ms quantum 2000 limit 10240 flows 2048 ECN
   Children flowsets: 4 3
# ipfw queue show
q00001 50 sl. 0 flows (256 buckets) sched 1 weight 10 lmax 0 pri 0 droptail
 mask: 0x00 0x00000000/0x0000 -> 0xffffff00/0x0000
q00002 50 sl. 0 flows (256 buckets) sched 1 weight 90 lmax 0 pri 0 droptail
 mask: 0x00 0x00000000/0x0000 -> 0xffffff00/0x0000
q00003 50 sl. 0 flows (256 buckets) sched 2 weight 10 lmax 0 pri 0 droptail
 mask: 0x00 0xffffff00/0x0000 -> 0x00000000/0x0000
q00004 50 sl. 0 flows (256 buckets) sched 2 weight 90 lmax 0 pri 0 droptail
 mask: 0x00 0xffffff00/0x0000 -> 0x00000000/0x0000
What is wrong? Thanks for your help.
-
I believe I have everything set up per post #120 and beyond. My output for ipfw sched show looks correct, shellcmd is all set, router rebooted. I'm testing on a network where no traffic is going on except my computer's. My bufferbloat for downloading on dslreports has improved greatly (300+ down to 51ms avg), but I can't seem to get rid of the bufferbloat on the upload (the value slides between 300-1000ms depending on the bandwidth value I select for the limiter). My connection is slow compared to most of the folks I've seen in this thread (15 down / 1 up). I'm just wondering if I will ever be able to completely dial out the bufferbloat on a slow link like mine, or do I just need to keep experimenting with different bandwidth values?
-
I believe I have everything set up per post #120 and beyond. My output for ipfw sched show looks correct, shellcmd is all set, router rebooted. I'm testing on a network where no traffic is going on except my computer's. My bufferbloat for downloading on dslreports has improved greatly (300+ down to 51ms avg), but I can't seem to get rid of the bufferbloat on the upload (the value slides between 300-1000ms depending on the bandwidth value I select for the limiter). My connection is slow compared to most of the folks I've seen in this thread (15 down / 1 up). I'm just wondering if I will ever be able to completely dial out the bufferbloat on a slow link like mine, or do I just need to keep experimenting with different bandwidth values?
You should be able to. You are sacrificing some bandwidth cap so you don't get bloat. It's just finding that magical number for your connection. If you can't, your provider might be a little more sporadic than you think.
-
I believe I have everything set up per post #120 and beyond. My output for ipfw sched show looks correct, shellcmd is all set, router rebooted. I'm testing on a network where no traffic is going on except my computer's. My bufferbloat for downloading on dslreports has improved greatly (300+ down to 51ms avg), but I can't seem to get rid of the bufferbloat on the upload (the value slides between 300-1000ms depending on the bandwidth value I select for the limiter). My connection is slow compared to most of the folks I've seen in this thread (15 down / 1 up). I'm just wondering if I will ever be able to completely dial out the bufferbloat on a slow link like mine, or do I just need to keep experimenting with different bandwidth values?
You might have some trouble just getting it to work. I can't find where I read it, but the codel algorithm has trouble on connections at or below 1Mbps, because the transmission time of an MTU-sized frame is too close to the delay that codel uses for its inner workings.
In this link they talk about tuning codel for what they call really low speeds. I don't know what any of it means, so good luck.
https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel/#tuning-codel-for-circumstances-it-wasn-t-designed-for
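FWIW, the dummynet fq_codel scheduler does expose those knobs directly (target, interval, and quantum, as visible in the ipfw sched show output earlier in the thread), so that page's low-speed advice could be tried with something like this; the values here are only an illustrative starting point, not tested:
ipfw sched 1 config pipe 1 type fq_codel target 20ms interval 200ms quantum 300
-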
You could get some benefit from FairQ.
-
This has to be the best thread on the forum right now. Helped me a lot.
-
Noob question here. I installed the Shellcmd package with hopes of having the following command run at boot:
ipfw sched 1 config pipe 1 type fq_codel && ipfw sched 2 config pipe 2 type fq_codel
I searched this forum and Googled, but could not find out how to use the Shellcmd package. I could not locate any new options in the Web UI once the package was installed. What am I missing?
-
Noob question here. I installed the Shellcmd package with hopes of having the following command run at boot:
ipfw sched 1 config pipe 1 type fq_codel && ipfw sched 2 config pipe 2 type fq_codel
I searched this forum and Googled, but could not find out how to use the Shellcmd package. I could not locate any new options in the Web UI once the package was installed. What am I missing?
Refresh your browser window and look under Services->Shell Command.
-
Noob question here. I installed the Shellcmd package with hopes of having the following command run at boot:
ipfw sched 1 config pipe 1 type fq_codel && ipfw sched 2 config pipe 2 type fq_codel
I searched this forum and Googled, but could not find out how to use the Shellcmd package. I could not locate any new options in the Web UI once the package was installed. What am I missing?
Refresh your browser window and look under Services->Shell Command.
Doh. That's the one thing I didn't try. Many thanks!
-
Hi
Is it possible to use queues (different weights) with fq_codel?
-
So I have a pretty odd situation with fq_codel.
I created a floating rule:
Action - Match
Interface - WAN
Direction - out
Address Family - IPv4
Protocol - Any
In / Out pipe - lan wan
Now, when I do a traceroute from a LAN machine, all the hops show up as the destination! If I disable this floating rule, then traceroute works normally. If it's enabled and I do a traceroute from pfSense itself, that traceroute is fine. ICMP ping itself seems oddly unreliable, because every first ping is lost.
I have a regular firewall rule to allow IPv4 ICMP to the WAN address but disabling/enabling this does nothing. I also do manual outbound NAT but just the regular rules.
What's causing this? As soon as I disable the out floating rule, traceroute works normally. The bad traceroute even has latency corresponding to what the hops would normally be. Traceroute through a site-to-site VPN works normally (no floating rules for the VPN).
IPv6 traceroute is unaffected.
Does anyone else who uses a floating rule like mine see this? Any known solutions?
An example traceroute from a LAN machine:
# traceroute -I forum.pfsense.org
traceroute to forum.pfsense.org (208.123.73.70), 30 hops max, 60 byte packets
 1  uac.localdomain (192.168.112.1)  0.163 ms  0.143 ms  0.138 ms
 2  * 208.123.73.70 (208.123.73.70)  14.600 ms  15.180 ms
 3  208.123.73.70 (208.123.73.70)  15.136 ms  15.379 ms  15.528 ms
 4  * * *
 5  208.123.73.70 (208.123.73.70)  16.811 ms  16.837 ms  16.848 ms
 6  208.123.73.70 (208.123.73.70)  17.816 ms  16.261 ms  17.627 ms
 7  208.123.73.70 (208.123.73.70)  17.644 ms  19.906 ms  20.173 ms
 8  208.123.73.70 (208.123.73.70)  20.262 ms  19.962 ms  19.813 ms
 9  208.123.73.70 (208.123.73.70)  19.594 ms  19.867 ms  19.898 ms
10  208.123.73.70 (208.123.73.70)  60.034 ms  59.998 ms  60.353 ms
11  208.123.73.70 (208.123.73.70)  59.594 ms  55.994 ms  55.201 ms
12  208.123.73.70 (208.123.73.70)  55.441 ms  55.469 ms  56.421 ms
13  208.123.73.70 (208.123.73.70)  55.350 ms  57.401 ms  57.401 ms
14  208.123.73.70 (208.123.73.70)  54.101 ms  55.283 ms  62.990 ms
15  208.123.73.70 (208.123.73.70)  62.392 ms  62.218 ms  *
16  * * *
17  * * *
18  208.123.73.70 (208.123.73.70)  61.501 ms  62.231 ms  59.980 ms
19  208.123.73.70 (208.123.73.70)  67.374 ms  68.414 ms  68.897 ms
20  208.123.73.70 (208.123.73.70)  63.797 ms  74.896 ms  70.074 ms
The same traceroute from pfSense itself:
traceroute -I forum.pfsense.org
traceroute to forum.pfsense.org (208.123.73.70), 64 hops max, 48 byte packets
 1  173-228-88-1.dsl.dynamic.fusionbroadband.com (173.228.88.1)  14.262 ms  20.740 ms  17.791 ms
 2  gig1-29.cr1.lsatca11.sonic.net (70.36.243.77)  17.422 ms  21.650 ms  14.505 ms
 3  * * *
 4  50.ae4.gw.pao1.sonic.net (50.0.2.5)  15.070 ms  21.048 ms  19.681 ms
 5  ae6-102.cr1-pao1.ip4.gtt.net (69.22.130.85)  14.544 ms  13.726 ms  21.730 ms
 6  xe-8-1-6.cr0-sjc1.ip4.gtt.net (89.149.142.18)  16.210 ms  14.649 ms  14.841 ms
 7  as6461.ip4.gtt.net (216.221.158.110)  18.493 ms  18.531 ms  20.437 ms
 8  ae16.cr1.sjc2.us.zip.zayo.com (64.125.31.12)  22.334 ms  20.133 ms  20.252 ms
 9  ae27.cs1.sjc2.us.eth.zayo.com (64.125.30.230)  60.536 ms  79.274 ms  54.876 ms
10  ae2.cs1.lax112.us.eth.zayo.com (64.125.28.145)  55.359 ms  57.712 ms  63.770 ms
11  ae3.cs1.dfw2.us.eth.zayo.com (64.125.29.52)  59.466 ms  64.721 ms  63.456 ms
12  ae27.cr1.dfw2.us.zip.zayo.com (64.125.30.181)  60.663 ms  63.960 ms  59.337 ms
13  ae11.er1.dfw2.us.zip.zayo.com (64.125.20.66)  58.965 ms  64.998 ms  61.659 ms
14  ae8.er2.dfw2.us.zip.zayo.com (64.125.29.122)  64.904 ms  60.535 ms  59.003 ms
15  te-6-1-aus-core-11.zip.zayo.com (64.125.32.202)  65.484 ms  63.259 ms  63.947 ms
16  net64-20-229-170.static-customer.corenap.com (64.20.229.170)  65.048 ms  61.145 ms  59.658 ms
17  gw1.netgate.com (66.219.34.173)  67.752 ms  63.029 ms  63.543 ms
18  fw2.pfmechanics.com (208.123.73.4)  66.281 ms  68.793 ms  67.865 ms
19  208.123.73.70 (208.123.73.70)  66.834 ms  66.319 ms  66.047 ms