Playing with fq_codel in 2.4
-
@tman222 My pfSense is a VM on Hyper-V, and I'm testing from the host through the 10Gb Hyper-V interfaces to an Intel I340 Gb card hosting the v-switches, then via Cat6 to the modem. It's the only wired machine I have, and there's no difference with other browsers. Results are completely random. Thanks for the help, but I don't think it's worth the effort.
-
@tman222 said in Playing with fq_codel in 2.4:
@wgstarks - what are your fq-codel parameters set to? One thing you might try is increasing the value for the limit parameter. Here is a link to some good documentation on what each parameter does:
http://caia.swin.edu.au/freebsd/aqm/downloads.html
Hope this helps.

Limiters:
00001:  25.000 Mbit/s    0 ms burst 0
q131073  50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
 sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002: 400.000 Mbit/s    0 ms burst 0
q131074  50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
 sched 65538 type FIFO flags 0x0 0 buckets 0 active
Schedulers:
00001:  25.000 Mbit/s    0 ms burst 0
q65537  50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
 sched 1 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 NoECN
   Children flowsets: 1
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip           0.0.0.0/0             0.0.0.0/0     1     1390  0    0   0
00002: 400.000 Mbit/s    0 ms burst 0
q65538  50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
 sched 2 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 NoECN
   Children flowsets: 2
  0 ip           0.0.0.0/0             0.0.0.0/0     1       90  0    0   0
Queues:
q00001  50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
q00002  50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
These settings were based on speedtest results.
Thanks for the link. I'll check it out.
-
@tman222 said in Playing with fq_codel in 2.4:
@wgstarks - what are your fq-codel parameters set to? One thing you might try is increasing the value for the limit parameter.
The limit was set at the default of 10240 packets. I increased that to 10340, but I'm wondering if that increase is too small to make any difference. Should I try a larger increase?
-
@wgstarks said in Playing with fq_codel in 2.4:
@tman222 said in Playing with fq_codel in 2.4:
@wgstarks - what are your fq-codel parameters set to? One thing you might try is increasing the value for the limit parameter.
The limit was set at the default of 10240 packets. I increased that to 10340, but I'm wondering if that increase is too small to make any difference. Should I try a larger increase?
When I saw these messages, I ended up doubling the limit value from 10240 to 20480. That might be over-compensating somewhat, but thankfully I have not had any issues since. Hope this helps.
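If you want to check what's actually loaded, or experiment before committing a value in the GUI, here's a minimal sketch from the pfSense shell using dnctl (scheduler and pipe numbers are illustrative, and anything set this way is overwritten the next time pfSense regenerates the limiters from the GUI config):

dnctl sched show
dnctl sched 1 config pipe 1 type fq_codel target 5ms interval 100ms quantum 1514 limit 20480 flows 1024 noecn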
-
@jasonraymundo31 I'll give it a try.
You can see what I meant about having separate limiters per WAN connection, with a single queue inside each limiter. In the second picture you can also see a floating rule for the IPv4 and IPv6 versions of each WAN connection. In this instance my IPv6 is provided by Hurricane Electric and is largely irrelevant here, as it's so rarely used.
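For anyone reading without the screenshots: in the dnctl/rules.limiter syntax that appears later in this thread, that layout is roughly one download/upload pipe pair per WAN, each with a single child queue (the numbers and bandwidths here are made up; pfSense generates the real file from the GUI):

pipe 1 config bw 400Mb droptail
sched 1 config pipe 1 type fq_codel
queue 1 config pipe 1 droptail
pipe 2 config bw 20Mb droptail
sched 2 config pipe 2 type fq_codel
queue 2 config pipe 2 droptail

A second WAN would repeat the pattern with pipes 3 and 4, and the floating rules then point each WAN's IPv4/IPv6 traffic at its own in/out queue pair.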
-
I have applied the settings @uptownVagrant described in the post of 27 Nov. Running a traceroute (on an iMac) I get unexpected results, as others have posted.
traceroute google.com
traceroute to google.com (172.217.5.110), 64 hops max, 52 byte packets
 1  pfsense.firewall.localdomain (192.168.10.1)  0.531 ms  0.247 ms  0.224 ms
 2  sfo03s07-in-f110.1e100.net (172.217.5.110)  0.942 ms  0.838 ms  0.906 ms
 3  sfo03s07-in-f110.1e100.net (172.217.5.110)  5.972 ms  9.392 ms  7.845 ms
<snip>
11  sfo03s07-in-f110.1e100.net (172.217.5.110)  9.272 ms  8.283 ms  8.661 ms
With the floating rules disabled, it works normally:
traceroute google.com
traceroute to google.com (172.217.5.110), 64 hops max, 52 byte packets
 1  pfsense.firewall.localdomain (192.168.10.1)  0.389 ms  0.156 ms  0.243 ms
 2  192.168.1.254 (192.168.1.254)  0.815 ms  0.810 ms  0.733 ms
 3  <snip>
 9  * * *
10  108.170.237.106 (108.170.237.106)  8.826 ms  72.14.235.2 (72.14.235.2)  9.178 ms  74.125.252.150 (74.125.252.150)  8.790 ms
11  108.170.236.61 (108.170.236.61)  8.752 ms  sfo03s07-in-f110.1e100.net (172.217.5.110)  8.728 ms  108.170.236.61 (108.170.236.61)  8.469 ms
I think my limiters & rules are the same, EXCEPT that I use pfBlockerNG and it has rules at the TOP of the floating rules.
Limiter:
Limiters:
00001: 838.000 Mbit/s    0 ms burst 0
q131073  50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
 sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002: 910.000 Mbit/s    0 ms burst 0
q131074  50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
 sched 65538 type FIFO flags 0x0 0 buckets 0 active
Schedulers:
00001: 838.000 Mbit/s    0 ms burst 0
q65537  50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
 sched 1 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 100ms quantum 300 limit 10240 flows 4096 NoECN
   Children flowsets: 1
00002: 910.000 Mbit/s    0 ms burst 0
q65538  50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
 sched 2 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 100ms quantum 300 limit 10240 flows 4096 NoECN
   Children flowsets: 2
Queues:
q00001  50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
q00002  50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
pfctl -vvsr | grep "Codel"
@124(1566879036) pass out quick on igb0 reply-to (igb0 x.x.x.1) inet proto icmp all icmp-type trace keep state label "USER_RULE: work around for fq_Codel limiter"
@125(1566882242) pass quick on igb0 inet proto icmp all icmp-type echorep keep state label "USER_RULE: work around for fq_Codel limiter"
@126(1566882242) pass quick on igb0 inet proto icmp all icmp-type echoreq keep state label "USER_RULE: work around for fq_Codel limiter"
@127(1566882594) match in on igb0 inet all label "USER_RULE: No Improvement in Buffer Bloat: WAN in Codel limi..." dnqueue(1, 2)
@128(1566795208) match out on igb0 inet all label "USER_RULE: No Improvement in Buffer Bloat: WAN out Codel lim..." dnqueue(2, 1)

/tmp/rules.limiter:
pipe 1 config bw 838Mb droptail
sched 1 config pipe 1 type fq_codel target 5ms interval 100ms quantum 300 limit 10240 flows 4096 noecn
queue 1 config pipe 1 droptail
pipe 2 config bw 910Mb droptail
sched 2 config pipe 2 type fq_codel target 5ms interval 100ms quantum 300 limit 10240 flows 4096 noecn
queue 2 config pipe 2 droptail
Any ideas why my traceroute output is still incorrect?
-
@JonH I have the same problem, and after reading 600 more posts in this topic, I believe I have the answer for you.
You're using a traceroute that sends UDP probes by default, you're shaping both TCP and UDP, and that combination hits a bug in pfSense.
You can work around it by using ICMP for traceroutes, e.g. (disclaimer: I'm using Linux):
alias traceroute='traceroute -I'
Some here have mentioned that you may also be able to work around it by applying the limiters on LAN rules instead of floating rules. The alias is good enough for me for now, though, so I stopped reading at around 600 posts and can't show you what to do there :)
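If you don't want a permanent alias, the same thing works as a one-off command; the -I flag selects ICMP echo probes on the Linux, macOS, and FreeBSD traceroute implementations (Windows tracert already uses ICMP):

traceroute -I google.com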
-
@forbiddenlake Thanks for this info. I backed out of fq_codel a couple of months ago but may revisit it using the info you provided.
-
What I don't understand is that with no limiters and no QoS disciplines enabled, my traceroutes still aren't working. Where else could there be an issue? I don't need QoS now that I have gigabit fiber.
-
@forbiddenlake said in Playing with fq_codel in 2.4:
@JonH I have the same problem, and after reading 600 more posts in this topic, I believe I have the answer for you.
You're using a traceroute that sends UDP probes by default, you're shaping both TCP and UDP, and that combination hits a bug in pfSense.
You can work around it by using ICMP for traceroutes, e.g. (disclaimer: I'm using Linux):
alias traceroute='traceroute -I'
Some here have mentioned that you may also be able to work around it by applying the limiters on LAN rules instead of floating rules. The alias is good enough for me for now, though, so I stopped reading at around 600 posts and can't show you what to do there :)
Hi, do you have details of this bug? Thanks.
-
@chrcoluk I believe this is the "bug" being referenced. Certain configurations cause pfSense not to decrease the TTL when forwarding. Policy routing is used with direction=out limiters, so it's a common cause of the behavior folks are seeing in this thread where traceroute doesn't work. If you're using a configuration similar to this, there is a provision for ICMP traceroute, but if your traceroute program sends UDP packets, those packets go through policy routing and their TTL is not decreased at pfSense.
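A quick way to see this for yourself, assuming igb0 is the WAN interface as in the posts above: capture the outgoing probes on WAN with tcpdump -v (which prints the IP TTL) and compare a UDP traceroute against an ICMP one. The port range below is just the usual default UDP traceroute range.

tcpdump -ni igb0 -v 'udp and dst portrange 33434-33534'
tcpdump -ni igb0 -v 'icmp[icmptype] = icmp-echo'

With the policy-routed UDP probes you should see the TTL leave pfSense undecremented, while the ICMP probes behave normally.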
-
@robnitro said in Playing with fq_codel in 2.4:
What I don't understand is that with no limiters and no QoS disciplines enabled, my traceroutes still aren't working. Where else could there be an issue? I don't need QoS now that I have gigabit fiber.
The ISP could be the issue. I know Verizon has issues with traceroutes not showing properly in some areas, essentially showing your router then the destination host in a 2-hop traceroute (or more if you have multiple routers between you and Verizon).
This thread on DSLReports shows it starting back in late 2018, and it was still noted as happening in August this year. I'm still seeing the issue, though. Has VZ disabled TTL propagation?
-
Hi guys, an update from me.
I did some more messing around with my limiters and changed my main pipe to this:
FQ_PIE target 5ms tupdate 15ms alpha 0.125 beta 1.25 max_burst 150ms max_ecnth 0.1 quantum 300 limit 1000 flows 1024 ECN CapDrop TS Derand
Also, on the queue:
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 AQM CoDel target 5ms interval 100ms NoECN
So I ditched droptail.
Now, under downstream congestion it performs way better. I still have to leave a sizeable overhead for it not to affect latency, but with FQ_CODEL and droptail I needed a massive 50-60% overhead; with this new configuration 2% isn't enough but 12% seems to be. I have yet to try anything between 2% and 12% to see how low I can get it, but I already consider 12% a massive improvement. :)
Also, the masking is set on the source /24, not the destination.
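For reference, a sketch of roughly what those two pieces look like in the dnctl/ipfw syntax from the CAIA AQM documentation linked earlier in the thread (the pipe number, bandwidth, and mask value are placeholders; pfSense normally writes the real config from the GUI):

pipe 1 config bw 90Mb
sched 1 config pipe 1 type fq_pie target 5ms tupdate 15ms alpha 0.125 beta 1.25 max_burst 150ms max_ecnth 0.1 quantum 300 limit 1000 flows 1024 ecn capdrop ts derand
queue 1 config pipe 1 codel target 5ms interval 100ms noecn mask src-ip 0xffffff00

The last line is the CoDel AQM on the child queue replacing droptail, with the /24 source mask.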
-
@w0w said in Playing with fq_codel in 2.4:
both are on the same LAN
I have had the same issues with pf.
FQ_Codel in 2.4.4 doesn't work with floating rules.
It only works via the GUI as a LAN limiter with children and weighted sub-queues, and even then TCP and UDP traffic (UDP VoIP) still experiences spikes under TCP load. I should add that I have weighted UDP VoIP sub-queues under the parent limiter; if not used like this, fq_codel and fq_pie (with no interface shaping) are a mess.
On top of this, traffic shapers on the interfaces always hinder the floating-rule method, so I have disabled traffic shaping on the interfaces, as per the Linux method.
What is odd is that fq_codel actually works very well with all IPv4 traffic and all protocols on Debian when applied to the WAN interface.
With FreeBSD, for some reason fq_pie only seems to work with UDP packets without shaping all IPv4, whereas fq_codel with ALTQ only seems to work with TCP, as per the original CoDel implementation.
Hoping smart queueing works properly in the distribution soon, as fq_codel does not perform anywhere near the same as on Linux.
Anyone else had this issue?
-
@m8ee what do your rules look like? I have not had any problems getting fq-CoDel to work in either 2.4.4 or 2.4.5 with limiters and floating rules.
-
Hi all,
I'm a new pfSense user here.
I set up my FQ_Codel in my fresh install as per https://www.youtube.com/watch?v=o8nL81DzTlU
However, I get a bunch of flowset errors in my syslog. Any ideas? I've been reading that it's a bug, but those reports are from 2017... and I'm unsure if they're still a thing in 2020. I'm currently at work on a break and decided to chip in with my concern.
Is the video guide maybe outdated? Does anyone have the 2020 version?
Thanks.
-
@Zeny001 try setting the queue management algorithm under the queues to Tail Drop and see if that helps.
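For what it's worth, in the generated /tmp/rules.limiter shown earlier in the thread, that GUI setting should simply correspond to the droptail keyword on the child queue, e.g. (queue/pipe numbers illustrative):

queue 1 config pipe 1 droptail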
-
Wow. That did it. From C bufferbloat on DSLreports to A+.
Thanks a bunch. For anyone having trouble, remember to uncheck ECN, since tail drop does not support it.
I lost about 300 Mbps of bandwidth, though.
I have a gigabit connection and I'm getting about 600 Mbps now; I was getting 900-ish before. I don't really care, but if anyone's got any tips, let me know :)
-
@Zeny001 you could try lowering the limit by a factor of ten and increasing flows by the same factor, and see if that makes a difference.
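In the dnctl syntax used earlier in the thread, that suggestion would look roughly like this (a sketch only, starting from the default limit 10240 / flows 1024 quoted above; set the real values in the GUI limiter settings):

sched 1 config pipe 1 type fq_codel target 5ms interval 100ms quantum 1514 limit 1024 flows 10240 noecn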