Playing with fq_codel in 2.4
-
OK, a bit more information, since last time I was doing this just before bed while getting tired.
As mentioned before, all traffic was being blocked, except on one occasion which I will mention in a moment.
I disabled ALTQ, enabled the dummynet limiter, and disabled the majority of my LAN firewall rules (they mostly existed to divert traffic into different-priority HFSC queues). The firewall was thus vastly simplified: the only LAN rules left were the pfBlockerNG rules, rules routing specific IPs to VPNs, and the default outbound LAN rule, which I set to use the dummynet in/out pipes.
If I viewed the live limiter stats (the enhanced stats, with the patch provided in this thread), I always saw an active bucket seemingly processing continuous data even though there was only idle network activity, and all internet connections that were not already established timed out.
However, on one occasion this bucket didn't appear and connectivity was working; but as soon as I started a speedtest, everything went back to timing out and the bucket showing continuous data was back.
In addition, if I left it like this for a while, the console would start getting flooded with messages like "fq_codel_enqueue over limit" and maxidx warnings. If I left it even longer, the kernel panicked.
Running ipfw pipe flush immediately stopped the console messages (and also prevented the panic).
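For anyone wanting to try the same recovery step, a minimal sketch (assuming shell access on a stock pfSense box; the filter reload helper path is the usual pfSense one, so treat it as an assumption on other setups):

```shell
# Flush all dummynet pipes/queues/schedulers; this is what stopped the
# "fq_codel_enqueue over limit" console spam for me.
ipfw pipe flush

# Optionally reload the pf ruleset afterwards so the limiter
# configuration from the GUI gets re-applied (pfSense helper script).
/etc/rc.filter_configure
```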
Changing the default outbound LAN rule to not use the dummynet pipes immediately restored connectivity. However, this did not stop that bucket's flow of data, which occurred even when no firewall rules were routing traffic to the dummynet pipes. I think HFSC may have been causing me some issues, so for now I have gone back to ALTQ with FAIRQ for the upstream; there is no downstream ALTQ active, so right now I have no downstream shaping.
I've got no idea how to proceed now, so unless I get suggestions I won't be trying this again for a while, as I don't think I'll get anywhere. I am curious, though: is anyone else here who is using it running 2.4.2?
-
Previously, when I last tried this on a 2.4.0 development build, it at least functioned.
Now, trying again on the 2.4.2 release, it just won't work.
Following the instructions to the letter, and even using my backed-up config from when it previously worked, results in all outbound connections timing out as if blocked.
If I set the in/out pipes to use the child queues, connections work briefly but then time out after a few seconds.
If I set the in/out pipes to none, everything works, but of course the shaper isn't being used. The Limiter Info page shows the correct information, as do ipfw pipe show and ipfw sched show. The issue seems to be PF not redirecting traffic properly to ipfw dummynet.
Also of interest: on the console I am getting notices saying the end of the ipfw rules was hit and the packet denied, which is odd, as the dmesg boot output shows the default ipfw policy set to allow. That conflicts with the denied packets.
I've got fq_codel working fine on the latest 2.4.2 release using two root limiters and four queues under each. The only algorithm parameters I've tweaked from default are the limit, interval, and target.
Can you show us the output of:
ipfw sched show
ipfw pipe show
ipfw queue show
Along with a screenshot of how your limiters/queues are set up? That will help us debug things further.
Hope this helps.
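For reference, a sketch of what that kind of setup looks like from the shell, using the same command form as elsewhere in this thread (the rates and limit here are illustrative examples, not my actual values):

```shell
# Two root limiters (download and upload), each switched to fq_codel,
# tweaking only limit, interval and target from their defaults.
ipfw sched 1 config pipe 1 type fq_codel target 5 interval 100 limit 2048 ecn
ipfw sched 2 config pipe 2 type fq_codel target 5 interval 100 limit 2048 ecn
```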
-
OK, I wish I had already stored that information. I will go back to this again, probably on Boxing Day, and post the information you requested then.
-
I don't have any problems with fq_codel either. I think it could be some package or enabled feature, like the captive portal.
-
I'm testing pfSense on my network with an SG-4860. Following the hints here, I've got fq_codel working on my network without anything too unusual happening that could be caused by the use of fq_codel (hopefully).
However, the guide from the bufferbloat project mentions a couple of tuning parameters like "target", "quantum" and "limit". They suggest a "limit" of under 1000, but the limit option appears to be a global cap on the sum of the queue lengths of all flows managed under the scheduler: https://github.com/freebsd/freebsd/blob/2589d9ccafc21d29deade87a50261657c27c5700/sys/netpfil/ipfw/dn_sched_fq_codel.c#L328
If I set the limit on fq_codel to 1000 (e.g. "sched 1 config pipe 1 type fq_codel target 15 ecn quantum 300 limit 1000"), I get the "kernel: fq_codel_enqueue over limit" and "kernel: fq_codel_enqueue maxidx = XYZ" messages that a user reported above. This is happening on a 100/5 connection. It could be a problem with my connection, but given that it spews to the log, it seems to be a failure mode. I'm not sure which direction (upload or download) it's happening on.
-
I'm not entirely familiar with fq_codel or its implementations, but your quantum should not be lower than your MTU, which is why the spec's default is 1514: the standard 1500-byte Ethernet MTU plus the 14-byte Ethernet header.
edit: I rescind this
-
I'm testing pfSense on my network with an SG-4860. Following the hints here, I've got fq_codel working on my network without anything too unusual happening that could be caused by the use of fq_codel (hopefully).
However, the guide from the bufferbloat project mentions a couple of tuning parameters like "target", "quantum" and "limit". They suggest a "limit" of under 1000, but the limit option appears to be a global cap on the sum of the queue lengths of all flows managed under the scheduler: https://github.com/freebsd/freebsd/blob/2589d9ccafc21d29deade87a50261657c27c5700/sys/netpfil/ipfw/dn_sched_fq_codel.c#L328
If I set the limit on fq_codel to 1000 (e.g. "sched 1 config pipe 1 type fq_codel target 15 ecn quantum 300 limit 1000"), I get the "kernel: fq_codel_enqueue over limit" and "kernel: fq_codel_enqueue maxidx = XYZ" messages that a user reported above. This is happening on a 100/5 connection. It could be a problem with my connection, but given that it spews to the log, it seems to be a failure mode. I'm not sure which direction (upload or download) it's happening on.
Those errors indicate that your queue size (the "limit" parameter) is too small. For the connection speed you have, the value should be higher (especially if you use fq_codel on both upload and download traffic). I think the algorithm's default parameters would work fine in your case, but if you want, you can reduce the queue size a little (the default of 10240 is quite large) and tweak the quantum (depending on your traffic profile, i.e. smaller vs. larger packets). If performance is not satisfactory, you can also try increasing the target a little (e.g. to 8 or 10 ms), but I think the default value of 5 ms should work fine for your connection's upload speed.
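Putting that advice into a concrete command, in the same form used earlier in the thread (the target and limit values here are just examples to adjust to taste, not recommendations):

```shell
# Keep the algorithm defaults for quantum/interval, trim the queue size
# somewhat from the 10240 default, and nudge the target slightly upward.
ipfw sched 1 config pipe 1 type fq_codel target 8 interval 100 quantum 1514 limit 2048 ecn
```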
Here's a link to some documentation:
http://caia.swin.edu.au/freebsd/aqm/patches/README-0.2.1.txt
quantum: the number of bytes a queue can be served before being moved to the tail of the old-queues list. Default: 1514 bytes; the default can be changed via the sysctl variable net.inet.ip.dummynet.fqcodel.quantum
limit: the hard limit on the total size of all queues managed by an fq_codel scheduler instance. Default: 10240 packets; the default can be changed via the sysctl variable net.inet.ip.dummynet.fqcodel.limit
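If you want to inspect or change those defaults system-wide rather than per-scheduler, the sysctls named above can be read and set from the shell (the value in the last line is just an example):

```shell
# Read the current global fq_codel defaults
sysctl net.inet.ip.dummynet.fqcodel.quantum
sysctl net.inet.ip.dummynet.fqcodel.limit

# Change the default hard queue limit (example value)
sysctl net.inet.ip.dummynet.fqcodel.limit=2048
```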
Additional info that may be useful:
https://tools.ietf.org/html/draft-ietf-aqm-fq-codel-06
https://www.reddit.com/r/openbsd/comments/6ttuhn/fq_codel_scheduling/
Hope this helps.
-
tman222 is correct about the quantum.
https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel/
We have generally settled on a quantum of 300 for usage below 100mbit as this is a good compromise between SFQ and pure DRR behavior that gives smaller packets a boost over larger ones.
-
I set up fq_codel using floating rules on another system, and the same IPv4 traceroute/ICMP problem I mentioned earlier occurs.
For anyone else using floating rules to match traffic for fq_codel: do you see IPv4 ICMP traceroute working properly?
I see the same (2.4.2-RELEASE-p1)
-
I've got fq_codel working fine on the latest 2.4.2 release using two root limiters and four queues under each. The only algorithm parameters I've tweaked from default are the limit, interval, and target.
Can you show us the output of:
ipfw sched show
ipfw pipe show
ipfw queue show
Along with a screenshot of how your limiters/queues are set up? That will help us debug things further.
Hope this helps.
I have the exact same issue as chrcoluk.
ipfw sched show
00001: 450.000 Mbit/s    0 ms burst 0
q65537  50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
 sched 1 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 1
00002: 450.000 Mbit/s    0 ms burst 0
q65538  50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
 sched 2 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 2
ipfw pipe show
00001: 450.000 Mbit/s    0 ms burst 0
q131073  50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
 sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002: 450.000 Mbit/s    0 ms burst 0
q131074  50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
 sched 65538 type FIFO flags 0x0 0 buckets 0 active
ipfw queue show
q00001  50 sl. 0 flows (256 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
    mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
q00002  50 sl. 0 flows (256 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
    mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
I also noticed that 'ipfw sched show' reports 0 buckets active, while it appears the correct value would be 1 active bucket. Any idea what I can do here?
-
I've got fq_codel working fine on the latest 2.4.2 release using two root limiters and four queues under each. The only algorithm parameters I've tweaked from default are the limit, interval, and target.
Can you show us the output of:
ipfw sched show
ipfw pipe show
ipfw queue show
Along with a screenshot of how your limiters/queues are set up? That will help us debug things further.
Hope this helps.
I have the exact same issue as chrcoluk.
ipfw sched show
00001: 450.000 Mbit/s    0 ms burst 0
q65537  50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
 sched 1 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 1
00002: 450.000 Mbit/s    0 ms burst 0
q65538  50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
 sched 2 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 2
ipfw pipe show
00001: 450.000 Mbit/s    0 ms burst 0
q131073  50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
 sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002: 450.000 Mbit/s    0 ms burst 0
q131074  50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
 sched 65538 type FIFO flags 0x0 0 buckets 0 active
ipfw queue show
q00001  50 sl. 0 flows (256 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
    mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
q00002  50 sl. 0 flows (256 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
    mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
I also noticed that 'ipfw sched show' reports 0 buckets active, while it appears the correct value would be 1 active bucket. Any idea what I can do here?
Looking at your setup, can you explain why you chose to have both LAN and WAN queues under each of the Upload and Download limiters? Do you use all those queues for separate traffic, subnets, etc.? A basic setup actually requires just one queue under each limiter. For example, have a look at post #121:
https://forum.pfsense.org/index.php?topic=126637.msg754199#msg754199
If you try basic settings such as those, do you experience the same problems?
Hope this helps.
-
Mine only has one queue under each limiter, and is basically like the post you linked to, except with different limits.
I do of course have the script to override with fq_codel, meaning the GUI setting stops controlling dummynet, but the GUI settings are the baseline, the only adjustment being the switch to fq_codel.
I have not had time yet to retest this, so I still have no live ipfw pipe etc. output. I wish you would accept my word without me having to paste it all, but I am now glad someone has reproduced the problem.
-
Mine only has one queue under each limiter, and is basically like the post you linked to, except with different limits.
I do of course have the script to override with fq_codel, meaning the GUI setting stops controlling dummynet, but the GUI settings are the baseline, the only adjustment being the switch to fq_codel.
I have not had time yet to retest this, so I still have no live ipfw pipe etc. output. I wish you would accept my word without me having to paste it all, but I am now glad someone has reproduced the problem.
Can you remind me of your setup/configuration:
1) What script are you using that you referenced above? What does it do? I don't recall having to set up any scripts, only Shellcmd to make sure fq_codel starts automatically after each reboot.
2) Do you have your queues applied to your outgoing-traffic LAN rule? Or in a traffic-matching rule on your WAN interface? If the latter, could you show us the configuration?
Hope this helps.
-
1 - Not a script, actually; just creating the /root/rules.limiter file and editing the shaper.inc code to use it,
as instructed by the OP of this thread.
2 - The pipe is configured in the LAN rules section (outgoing).
The configuration is the same as the screenshot you linked to in your previous post.
-
Hi,
I have an asymmetrical ADSL connection (5.3 Mbps down, 880 Kbps up at 95% of max speed, with a 25 ms ping).
I played with the settings and got an A on the dslreports bufferbloat test. I just wanted to share my settings with you guys, to discuss them and for you to tell me whether they are optimal for my connection, as there is no real guide on the Internet for tweaking these parameters on a low-rate asymmetrical DSL connection.
Download queue
ipfw sched 1 config pipe 1 type fq_codel target 5 interval 100 quantum 300 limit 325
Upload queue
ipfw sched 2 config pipe 2 type fq_codel target 26 interval 208 noecn quantum 300 limit 55
-
Interval is meant to represent the RTT (ping) of most of your connections. Typically 100 ms is in the right ballpark for most users. If most of the services you're connecting to are more than 100 ms away, then increase it.
-
Interval is meant to represent the RTT (ping) of most of your connections. Typically 100 ms is in the right ballpark for most users. If most of the services you're connecting to are more than 100 ms away, then increase it.
So let's say I have a 25 ms ping and most of the services I connect to are on the order of 30 ms; can I safely put an interval of 30 ms on both the download and upload queues?
And what about the target?
-
The target should be at least 1.5x the serialization delay of your MTU relative to your bandwidth. For example, a 1500-byte MTU is 12,000 bits; times 1.5 is 18,000 bits; divided by 1 Mbit/s gives 18 ms. In that case, your target should be at least 18 ms, but not much higher. This does not scale linearly: if you have 10 Mbit/s of bandwidth, that doesn't mean you want a 1.8 ms target. Generally 5 ms is good and is the default; it works well from 10 Mbit/s all the way to 10 Gbit/s, though that's not to say it is the best for any given bandwidth.
This is how target and interval are used.
Target is used to determine whether a packet has been in the queue for too long. Codel/fq_codel timestamps each packet when it gets enqueued and checks how long it has been in the queue when it gets dequeued. If the packet has been queued for too long and the queue is not in drop mode, it drops/discards the packet. This is where interval comes in: the packets behind that packet are probably as old or older than the one that was just dropped, but even though those packets are older than the target, Codel will not drop them until at least an interval's worth of time has passed. For the next interval, all packets continue to get dequeued as normal.
If, before the interval is reached, the queue sees a timestamp of less than the target, it leaves drop mode, as the latency has come down. If by the next interval no packet has been seen with a timestamp below the target, the queue drops/discards another packet, and then reduces the interval by some non-linear scaling factor, something like a square root or in that ballpark. Rinse and repeat: keep dequeuing packets until either a packet is below the target or the interval is reached again; if the interval is reached, drop the current packet and reduce the interval further.
The reason the interval is the RTT is that a sender cannot respond to a dropped packet faster than the RTT. Codel doesn't want to just drop a large burst of packets to keep latency low; that would kill bandwidth. It wants to drop a single packet that is statistically likely to belong to one of the few heavy flows clogging the pipe, then wait an RTT's worth of time to see whether that fixed the issue. If latency is still high, it drops another packet and becomes more aggressive by reducing the amount of time before it drops again.
This allows a high burst of packets to move through the queue without drops most of the time, while quickly attacking any flow that attempts to fill the queue and keep it full. This keeps the queue very short.
The only difference with fq_codel is that it has one of these queues per bucket, with a default of 1024 buckets. It hashes each packet, which causes all packets of a given flow to land in the same bucket, while flows are randomly distributed among the buckets. Coupled with a DRR algorithm that tries to distribute time fairly among the buckets, driven by the quantum value, fq_codel tends to isolate heavy traffic from light traffic; even where a light flow shares a bucket with a heavy flow, latency is kept low and the heavy flow is statistically more likely to have its packets dropped.
Cake uses "ways" to keep perfect isolation between multiple flows sharing the same bucket, but that's another discussion, for something that isn't even complete yet.
-
Playing with this again:
root@PFSENSE ~ # ipfw pipe show
00001: 17.987 Mbit/s    0 ms burst 0
q131073  50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
 sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002: 9.200 Mbit/s    0 ms burst 0
q131074  50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
 sched 65538 type FIFO flags 0x0 0 buckets 0 active
root@PFSENSE ~ # ipfw sched show
00001: 17.987 Mbit/s    0 ms burst 0
q65537  50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
 sched 1 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 1
00002: 9.200 Mbit/s    0 ms burst 0
q65538  50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
 sched 2 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 2
root@PFSENSE ~ # ipfw queue show
q00001  50 sl. 0 flows (256 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
    mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
q00002  50 sl. 0 flows (16 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
    mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
After routing on the default LAN outbound rule:
root@PFSENSE ~ # ipfw pipe show
00001: 17.987 Mbit/s    0 ms burst 0
q131073  50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
 sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002: 9.200 Mbit/s    0 ms burst 0
q131074  50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
 sched 65538 type FIFO flags 0x0 0 buckets 0 active
root@PFSENSE ~ # ipfw sched show
00001: 17.987 Mbit/s    0 ms burst 0
q65537  50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
 sched 1 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 1
00002: 9.200 Mbit/s    0 ms burst 0
q65538  50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
 sched 2 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 2
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip           0.0.0.0/0             0.0.0.0/0    109248  9956265 27 3497   9
root@PFSENSE ~ # ipfw queue show
q00001  50 sl. 0 flows (16 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
    mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
q00002  50 sl. 0 flows (16 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
    mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
This rises rapidly even though the connection is idle:
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip           0.0.0.0/0             0.0.0.0/0    109248  9956265 27 3497   9
The console is full of:
fq_codel_enqueue over limit
and
fq_codel_enqueue maxidx = <random 3 digits, usually between 400 and 525>
A screenshot of the relevant part of the rule is attached to this post. Finally, if it helps: connectivity doesn't die if the traffic is only light, but any kind of bulk download, streaming or speedtest will make everything time out until the rules are reloaded, ipfw is flushed, or the outbound rule is configured not to route to the dummynet pipes.
-
Playing with this again:
root@PFSENSE ~ # ipfw pipe show
00001: 17.987 Mbit/s    0 ms burst 0
q131073  50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
 sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002: 9.200 Mbit/s    0 ms burst 0
q131074  50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
 sched 65538 type FIFO flags 0x0 0 buckets 0 active
root@PFSENSE ~ # ipfw sched show
00001: 17.987 Mbit/s    0 ms burst 0
q65537  50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
 sched 1 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 1
00002: 9.200 Mbit/s    0 ms burst 0
q65538  50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
 sched 2 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 2
root@PFSENSE ~ # ipfw queue show
q00001  50 sl. 0 flows (256 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
    mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
q00002  50 sl. 0 flows (16 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
    mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
After routing on the default LAN outbound rule:
root@PFSENSE ~ # ipfw pipe show
00001: 17.987 Mbit/s    0 ms burst 0
q131073  50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
 sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002: 9.200 Mbit/s    0 ms burst 0
q131074  50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
 sched 65538 type FIFO flags 0x0 0 buckets 0 active
root@PFSENSE ~ # ipfw sched show
00001: 17.987 Mbit/s    0 ms burst 0
q65537  50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
 sched 1 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 1
00002: 9.200 Mbit/s    0 ms burst 0
q65538  50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
 sched 2 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 2
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip           0.0.0.0/0             0.0.0.0/0    109248  9956265 27 3497   9
root@PFSENSE ~ # ipfw queue show
q00001  50 sl. 0 flows (16 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
    mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
q00002  50 sl. 0 flows (16 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
    mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
This rises rapidly even though the connection is idle:
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip           0.0.0.0/0             0.0.0.0/0    109248  9956265 27 3497   9
The console is full of:
fq_codel_enqueue over limit
and
fq_codel_enqueue maxidx = <random 3 digits, usually between 400 and 525>
A screenshot of the relevant part of the rule is attached to this post. Finally, if it helps: connectivity doesn't die if the traffic is only light, but any kind of bulk download, streaming or speedtest will make everything time out until the rules are reloaded, ipfw is flushed, or the outbound rule is configured not to route to the dummynet pipes.
Those errors usually indicate that the queue size (the limit parameter) is too small. However, since yours is already very large, something else must be misconfigured for the queue to fill up and run out of space. It almost looks to me as if the queue is filling up while not enough traffic is passing through it (for instance, if the limiters were not properly configured, or a rule is blocking the traffic).
Can you please also show us a screenshot of your limiters and queue configuration (apologies if you already posted this before, but I couldn't find it)?
Thanks.