Netgate Discussion Forum

    Playing with fq_codel in 2.4

    Traffic Shaping
    1.1k Posts 123 Posters 1.5m Views
    • tman222:

      @azuisleet:

      I'm testing pfSense on my network with an SG-4860. Following the hints here, I've got fq_codel working on my network without anything too unusual happening that could be caused by the use of fq_codel (hopefully).

      However, the guide from the bufferbloat project mentions a couple of tuning parameters, such as "target", "quantum", and "limit". They suggest a "limit" of under 1000, but the limit option appears to be a global cap on the sum of the queue lengths of all flows managed by the scheduler: https://github.com/freebsd/freebsd/blob/2589d9ccafc21d29deade87a50261657c27c5700/sys/netpfil/ipfw/dn_sched_fq_codel.c#L328

      If I set the limit on fq_codel to 1000 (e.g. "sched 1 config pipe 1 type fq_codel target 15 ecn quantum 300 limit 1000"), I get the "kernel: fq_codel_enqueue over limit" and "kernel: fq_codel_enqueue maxidx = XYZ" messages that a user reported above. This is happening on a 100/5 connection. It could be a problem with my connection, but the fact that it spews to the log suggests a failure mode. I'm not sure which direction (upload or download) it's happening in.

      Those errors indicate that your queue size (the "limit" parameter) is too small.  For your connection speed, the value should be higher (especially if you use fq_codel on both upload and download traffic).  I think the algorithm's default parameters would work fine in your case, but if you want, you can reduce the queue size a little (10240 is quite large) and tweak the quantum (depending on your traffic profile, i.e. smaller vs. larger packets).  If performance is not satisfactory, you can also try increasing the target a little (e.g. to 8 or 10 ms), but I think the default value of 5 ms should work fine for your connection's upload speed.
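      As a back-of-the-envelope check on the "limit" value (my own arithmetic, not from the FreeBSD docs; the 5 Mbit/s rate and 1514-byte packet size are assumptions for illustration), you can estimate how long a completely full queue would take to drain at a given link speed. CoDel's drops normally keep the queue far below the limit, so this only bounds the worst case:

```python
# Worst-case drain time of a full dummynet/fq_codel queue.
# Illustrative values: 1514-byte packets and a 5 Mbit/s upload link.

def drain_time_ms(limit_packets, packet_bytes, rate_mbps):
    """Milliseconds to drain `limit_packets` full-size packets at `rate_mbps`."""
    bits = limit_packets * packet_bytes * 8
    return bits / (rate_mbps * 1_000_000) * 1000

# Default limit of 10240 packets on a 5 Mbit/s upload: ~25 seconds of buffer.
print(round(drain_time_ms(10240, 1514, 5)))  # 24805 (ms)

# A limit of 1000 packets on the same link: ~2.4 seconds.
print(round(drain_time_ms(1000, 1514, 5)))   # 2422 (ms)
```

      This is why 10240 is "quite large": at low upload speeds the hard limit alone represents many seconds of potential buffering.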

      Here's a link to some documentation:

      http://caia.swin.edu.au/freebsd/aqm/patches/README-0.2.1.txt

      quantum:
          is the number of bytes a queue can be served before being moved to
          the tail of the old queues list. Default: 1514 bytes; the default can
          be changed via the sysctl variable net.inet.ip.dummynet.fqcodel.quantum
      
      limit:
          is the hard limit on the total size of all queues managed by an
          fq_codel scheduler instance. Default: 10240 packets; the default can
          be changed via the sysctl variable net.inet.ip.dummynet.fqcodel.limit
      

      Additional info that may be useful:
      https://tools.ietf.org/html/draft-ietf-aqm-fq-codel-06
      https://www.reddit.com/r/openbsd/comments/6ttuhn/fq_codel_scheduling/

      Hope this helps.

      • Harvy66:

        tman222 is correct about the quantum.

        https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel/

        We have generally settled on a quantum of 300 for usage below 100mbit as this is a good compromise between SFQ and pure DRR behavior that gives smaller packets a boost over larger ones.

        • forbiddenlake:

          @darkcrucible:

          I set up fq_codel using floating rules on another system, and the same IPv4 traceroute/ICMP problem I mentioned earlier occurs.

          Anyone else who uses floating rules to match traffic for fq_codel: do you see IPv4 ICMP traceroute working properly?

          I see the same (2.4.2-RELEASE-p1)

          • lukezamboni:

            @tman222:

            I've got fq_codel working fine on the latest 2.4.2 release using two root limiters and four queues under each.  The only algorithm parameters I've tweaked from default are the limit, interval, and target.

            Can you show us the output of:

            ipfw sched show
            ipfw pipe show
            ipfw queue show

            Along with a screenshot of how your limiters/queues are set up?  That will help us debug things further.

            Hope this helps.

            I have the exact same issue as chrcoluk.

            ipfw sched show

            
            00001: 450.000 Mbit/s    0 ms burst 0
            q65537  50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
             sched 1 type FQ_CODEL flags 0x0 0 buckets 0 active
             FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
               Children flowsets: 1
            00002: 450.000 Mbit/s    0 ms burst 0
            q65538  50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
             sched 2 type FQ_CODEL flags 0x0 0 buckets 0 active
             FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
               Children flowsets: 2
            
            

            ipfw pipe show

            
            00001: 450.000 Mbit/s    0 ms burst 0
            q131073  50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
             sched 65537 type FIFO flags 0x0 0 buckets 0 active
            00002: 450.000 Mbit/s    0 ms burst 0
            q131074  50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
             sched 65538 type FIFO flags 0x0 0 buckets 0 active
            
            

            ipfw queue show

            q00001  50 sl. 0 flows (256 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
                mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
            q00002  50 sl. 0 flows (256 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
                mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
            
            

            I also noticed that 'ipfw sched show' shows 0 buckets active, while it appears the correct value would be 1 bucket active. Any idea what I can do here?

            Attached screenshots: Screenshot_1.png, Screenshot_3.png, Screenshot_4.png, Screenshot_5.png, Screenshot_6.png, Screenshot_7.png

            • tman222:

              @lukezamboni:

              I have the exact same issue as chrcoluk. […] I also noticed that 'ipfw sched show' shows 0 buckets active, while it appears the correct value would be 1 bucket active. Any idea what I can do here?

              Looking at your setup, can you explain why you chose to have both LAN and WAN queues under each of Upload and Download?  Do you use all those queues for separate traffic, subnets, etc.?  A basic setup actually requires just one queue under each limiter.  For example, have a look at post #121:

              https://forum.pfsense.org/index.php?topic=126637.msg754199#msg754199

              If you try basic settings such as those, do you still experience problems?

              Hope this helps.

              • chrcoluk:

                Mine only has one queue for each limiter, and is basically like the post you linked to, except with different limits.

                I of course have the script to override using fq_codel, meaning the GUI setting stops controlling dummynet, but the GUI settings are the baseline, with the only adjustment being the switch to fq_codel.

                I have not had time yet to retest this, so I still have no live ipfw pipe etc. outputs. I wish you would accept my word without me having to paste it all, but I am now glad someone has reproduced the problem.

                pfSense CE 2.7.2

                • tman222:

                  @chrcoluk:

                  I of course have the script to override using fq_codel meaning the GUI setting stops controlling dummynet […] I have not had time yet to retest this so I still have no live ipfw pipe etc. outputs.

                  Can you remind me of your setup/configuration:

                  1)  What script are you using that you referenced above?  What does it do?  I don't recall having to set up any scripts, only Shellcmd to make sure fq_codel starts automatically after each reboot.
                  2)  Do you have your queues applied to your outgoing LAN traffic rule, or in a traffic-matching rule on your WAN interface?  If the latter, could you show us the configuration?

                  Hope this helps.

                  • chrcoluk:

                    1 - Not a script, actually: just creating the /root/rules.limiter file and editing the shaper.inc code to use it, as instructed by the OP of this thread.

                    2 - The pipe is configured in the LAN rules section (outgoing).

                    Configuration is the same as in the screenshot you linked to in your previous post.

                    pfSense CE 2.7.2

                    • uryupinsk:

                      Hi,

                      I have an asymmetrical ADSL connection (5.3 Mbps down, 880 Kbps up at 95% of max speed, with 25 ms ping).

                      I played with the settings and got an A on the dslreports bufferbloat test. I just wanted to share my settings with you guys to discuss them, and to hear whether these are optimal for my connection, as there is no real guide on the Internet for tweaking these parameters for a low-rate asymmetrical DSL connection.

                      Download queue

                      ipfw sched 1 config pipe 1 type fq_codel target 5 interval 100 quantum 300 limit 325
                      

                      Upload queue

                      ipfw sched 2 config pipe 2 type fq_codel target 26 interval 208 noecn quantum 300 limit 55
                      
                      • Harvy66:

                        Interval is meant to represent the RTT (ping) of most of your connections. Typically 100 ms is ballpark correct for most users. If most of the services you're connecting to are more than 100 ms away, then increase it.

                        • uryupinsk:

                          @Harvy66:

                          Interval is meant to represent the RTT (ping) of most of your connections. Typically 100 ms is ballpark correct for most users. If most of the services you're connecting to are more than 100 ms away, then increase it.

                          So let's say I have a 25 ms ping and most of the services I connect to are on the order of 30 ms. Can I safely set an interval of 30 ms for both the download and upload queues?

                          And what about the target?

                          • Harvy66:

                            The target should be at least 1.5x the serialization delay of your MTU relative to your bandwidth. For example, a 1500-byte MTU is 12,000 bits; times 1.5 that's 18,000 bits; divided by 1 Mbit/s, that's 18 ms. In this case your target should be at least 18 ms, but not much higher. This does not scale linearly: if you have 10 Mbit/s of bandwidth, that doesn't mean you want a 1.8 ms target. Generally 5 ms is good and is the default. The 5 ms default works well from 10 Mbit/s all the way to 10 Gbit/s, but that's not to say it is the best for any given bandwidth.
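                            The arithmetic above can be written out as a quick calculation (a sketch of the same reasoning; the MTU and rates are just the examples from the post):

```python
# Minimum suggested CoDel target: 1.5x the serialization delay of one
# MTU-sized packet at the link rate, per the reasoning above.

def min_target_ms(mtu_bytes, rate_mbps, factor=1.5):
    serialization_ms = mtu_bytes * 8 / (rate_mbps * 1000)  # 1 Mbit/s = 1000 bits/ms
    return serialization_ms * factor

print(min_target_ms(1500, 1))   # 18.0 -> at 1 Mbit/s the target should be ~18 ms
print(min_target_ms(1500, 10))  # ~1.8 -> but at 10 Mbit/s the 5 ms default is fine
```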

                            This is how target and interval are used.

                            Target is used to determine if a packet has been in the queue for too long. CoDel/fq_codel timestamps each packet when it is enqueued and checks how long it has been in the queue when it is dequeued. If the packet has been in the queue too long and the queue is not in drop mode, it will drop/discard the packet. This is where interval comes in. The packets behind that packet are probably as old or older than the packet that just got dropped. But even though those packets are older than the target, CoDel will not drop them until at least an interval amount of time has passed. For the next interval of time, all packets will continue to get dequeued as normal.

                            If, before the interval is reached, the queue sees a timestamp of less than the target, the queue will leave drop mode, as the latency has come down. If by the next interval no packet has been seen with a timestamp of less than the target, the queue will drop/discard another packet. It will then reduce the interval by some non-linear scaling factor, like square root or something in that ballpark. Rinse and repeat: keep dequeuing packets until a packet is below the target or the interval is reached again. If the interval is reached, drop the current packet and reduce the interval again.

                            The reason the interval is the RTT is that a sender cannot respond to a dropped packet faster than the RTT. CoDel doesn't want to just start dropping a large burst of packets to keep latency low; that would kill bandwidth. It just wants to drop a single packet that is statistically likely to belong to one of the few heavy flows clogging the pipe, then wait an RTT amount of time to see if that fixed the issue. If latency is still high, it drops another packet and becomes more aggressive by reducing the amount of time before it drops again.

                            This allows a high burst of packets to move through the queue without drops most of the time, but at the same time quickly attacks any flows that attempt to fill up the queue and keep it full. This keeps the queue very short.
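                            The drop-mode behavior described above can be sketched as a toy model (not the dummynet code; the interval/sqrt(count) schedule is the square-root scaling mentioned above, as described in the CoDel literature):

```python
# Toy sketch of CoDel's drop scheduling while latency stays above target:
# after each drop, the wait until the next drop shrinks as interval/sqrt(count),
# so drops get progressively more aggressive until latency recovers.
import math

def drop_times(interval_ms=100.0, drops=5):
    """Relative times (ms) of successive drops while the queue stays too full."""
    t, times = 0.0, []
    for count in range(1, drops + 1):
        t += interval_ms / math.sqrt(count)
        times.append(round(t, 1))
    return times

# Drops come closer and closer together until latency falls below target:
print(drop_times())  # [100.0, 170.7, 228.4, 278.4, 323.2]
```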

                            The only difference with fq_codel is that it has these queues per bucket, with a default of 1024 buckets. It hashes each packet, which causes all packets for a given flow to land in the same bucket, while flows are randomly distributed among all of the buckets. Coupled with a DRR (deficit round robin) algorithm that tries to fairly distribute time among the buckets, driven by the quantum value, fq_codel tends to isolate heavy traffic from lighter traffic; even where a light flow shares a bucket with a heavy flow, the latency is kept low and the heavy flow is statistically more likely to have its packets dropped.
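                            The per-flow bucketing can be illustrated with a toy hash (the CRC32 here is illustrative only, not the kernel's actual hash function):

```python
# Sketch of fq_codel flow bucketing: hash the flow 5-tuple into one of
# 1024 buckets ("flows 1024" in the ipfw output above). All packets of a
# flow land in the same bucket; distinct flows spread out pseudo-randomly.
import zlib

FLOWS = 1024  # default "flows" parameter

def bucket(proto, src, sport, dst, dport):
    key = f"{proto}|{src}:{sport}|{dst}:{dport}".encode()
    return zlib.crc32(key) % FLOWS

# Every packet of a given flow maps to the same bucket:
flow = ("tcp", "192.168.1.10", 50000, "203.0.113.5", 443)
print(bucket(*flow) == bucket(*flow))  # True

# A different flow usually (though not always) lands in a different bucket:
print(bucket("tcp", "192.168.1.11", 50001, "203.0.113.5", 443))
```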

                            Cake uses "ways" to keep perfect isolation between multiple flows sharing the same bucket, but that's another discussion, about something that isn't even complete yet.

                            • chrcoluk:

                              playing with this again

                              root@PFSENSE ~ # ipfw pipe show
                              00001:  17.987 Mbit/s    0 ms burst 0 
                              q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
                               sched 65537 type FIFO flags 0x0 0 buckets 0 active
                              00002:   9.200 Mbit/s    0 ms burst 0 
                              q131074  50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
                               sched 65538 type FIFO flags 0x0 0 buckets 0 active
                              root@PFSENSE ~ # ipfw sched show
                              00001:  17.987 Mbit/s    0 ms burst 0 
                              q65537  50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
                               sched 1 type FQ_CODEL flags 0x0 0 buckets 0 active
                               FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
                                 Children flowsets: 1 
                              00002:   9.200 Mbit/s    0 ms burst 0 
                              q65538  50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
                               sched 2 type FQ_CODEL flags 0x0 0 buckets 0 active
                               FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
                                 Children flowsets: 2 
                              root@PFSENSE ~ # ipfw queue show
                              q00001  50 sl. 0 flows (256 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
                                  mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
                              q00002  50 sl. 0 flows (16 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
                                  mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
                              

                              after routing on default lan outbound rule

                              root@PFSENSE ~ # ipfw pipe show 
                              00001:  17.987 Mbit/s    0 ms burst 0 
                              q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
                               sched 65537 type FIFO flags 0x0 0 buckets 0 active
                              00002:   9.200 Mbit/s    0 ms burst 0 
                              q131074  50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
                               sched 65538 type FIFO flags 0x0 0 buckets 0 active
                              root@PFSENSE ~ # ipfw sched show
                              00001:  17.987 Mbit/s    0 ms burst 0 
                              q65537  50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
                               sched 1 type FQ_CODEL flags 0x0 0 buckets 0 active
                               FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
                                 Children flowsets: 1 
                              00002:   9.200 Mbit/s    0 ms burst 0 
                              q65538  50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
                               sched 2 type FQ_CODEL flags 0x0 0 buckets 1 active
                               FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
                                 Children flowsets: 2 
                              BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
                                0 ip           0.0.0.0/0             0.0.0.0/0     109248  9956265 27 3497   9
                              root@PFSENSE ~ # ipfw queue show 
                              q00001  50 sl. 0 flows (16 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
                                  mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
                              q00002  50 sl. 0 flows (16 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
                                  mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
                              

                              This counter rises rapidly even though the connection is idle

                              BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
                                0 ip           0.0.0.0/0             0.0.0.0/0     109248  9956265 27 3497   9
                              

                              console full of

                              fq_codel_enqueue over limit
                              and
                              fq_codel_enqueue maxidx = <random 3 digits, usually between 400 and 525>

                              Screenshot of the relevant part of the rule is attached to this post.

                              Finally, if it helps: connectivity doesn't die if the traffic is only light, but any type of bulk download, streaming, or speedtest will make "everything" time out until the rules are reloaded, ipfw is flushed, or the outbound rule is configured not to route to the dummynet pipes.

                              pfsensefaildummynet.png

                              pfSense CE 2.7.2

                              • tman222:

                                @chrcoluk:

                                […] console full of

                                fq_codel_enqueue over limit
                                and
                                fq_codel_enqueue maxidx = <random 3 digits, usually between 400 and 525>

                                Finally, if it helps: connectivity doesn't die if the traffic is only light, but any type of bulk download, streaming, or speedtest will make "everything" time out until the rules are reloaded, ipfw is flushed, or the outbound rule is configured not to route to the dummynet pipes.

                                Those errors usually indicate that the queue size (the limit parameter) is too small.  However, since yours is already very large, something else must be misconfigured for the queue to fill up and run out of space.  It almost seems to me that the queue is filling and not enough traffic is passing through it (for instance, if the limiters were not properly configured or a rule is blocking the traffic).

                                Can you please also show us a screenshot of your limiter and queue configuration (apologies if you already posted it before, but I couldn't find it)?

                                Thanks.

                                • w0w:

                                  
                                  Shell Output - ipfw pipe show
                                  
                                  00001: 265.576 Mbit/s    0 ms burst 0
                                  q131073  50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
                                   sched 65537 type FIFO flags 0x0 0 buckets 0 active
                                  00002: 275.576 Mbit/s    0 ms burst 0
                                  q131074  50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
                                   sched 65538 type FIFO flags 0x0 0 buckets 0 active
                                  
                                  
                                  
                                  Shell Output - ipfw sched show
                                  
                                  00001: 265.576 Mbit/s    0 ms burst 0
                                  q00001  50 sl. 0 flows (256 buckets) sched 1 weight 1 lmax 0 pri 0 droptail
                                      mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
                                   sched 1 type FQ_CODEL flags 0x0 0 buckets 1 active
                                   FQ_CODEL target 7ms interval 100ms quantum 2000 limit 10240 flows 2048 ECN
                                     Children flowsets: 1
                                  BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
                                    0 ip           0.0.0.0/0             0.0.0.0/0       58     3664  0    0   0
                                  00002: 275.576 Mbit/s    0 ms burst 0
                                  q00002  50 sl. 0 flows (256 buckets) sched 2 weight 1 lmax 0 pri 0 droptail
                                      mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
                                   sched 2 type FQ_CODEL flags 0x0 0 buckets 1 active
                                   FQ_CODEL target 7ms interval 100ms quantum 2000 limit 10240 flows 2048 ECN
                                     Children flowsets: 2
                                    0 ip           0.0.0.0/0             0.0.0.0/0       65    96980  0    0   0
                                  
                                  
                                  q00001  50 sl. 0 flows (256 buckets) sched 1 weight 1 lmax 0 pri 0 droptail
                                      mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
                                  q00002  50 sl. 0 flows (256 buckets) sched 2 weight 1 lmax 0 pri 0 droptail
                                      mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
                                  

                                  If we compare the ipfw sched show output, there is something missing on your side. I am using johnpoz's method, as it survives upgrades and doesn't require patching or editing anything in the base system.
                                  https://forum.pfsense.org/index.php?topic=126637.msg754199#msg754199
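                                  For anyone following along, the boot-time approach amounts to re-issuing the scheduler config after the GUI limiters are created. A minimal sketch (the pipe/scheduler numbers and fq_codel parameters here are assumptions and must match your own limiter setup):

                                  ```shell
                                  #!/bin/sh
                                  # Sketch of a boot-time script (run e.g. via the shellcmd mechanism) that
                                  # converts the FIFO schedulers created by the GUI limiters to FQ_CODEL.
                                  # Sched/pipe numbers 1 and 2 are assumptions; match them to your limiters.
                                  /sbin/ipfw sched 1 config pipe 1 type fq_codel target 7ms quantum 2000 flows 2048
                                  /sbin/ipfw sched 2 config pipe 2 type fq_codel target 7ms quantum 2000 flows 2048
                                  ```

                                  This needs root on the pfSense box; `ipfw sched show` should report `type FQ_CODEL` for both schedulers afterwards.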

                                  • C
                                    chrcoluk
                                    last edited by

                                    It probably won't be until Saturday, but I will post it. I tried an install of 2.4.0, restored the config, and it functions correctly with 100% the same configuration. Back on 2.4.2 the problem comes back (tested yesterday).

                                    I can post the limiter config now, I guess, as it's in the GUI but the enable box is unticked.

                                    I have two issues with what john posted.

                                    1 - It only applies fq_codel at boot; it will get lost on a limiter reload.
                                    2 - The instructions he posted were not detailed, so if I follow his setup I cannot be sure I am doing it right.

                                    I can tell you that the outbound LAN rule is being hit, both from the counters that are displayed and from the fact that when I edit the rule to stop using the pipes, traffic works again; it wouldn't have that effect if another rule above it were intercepting the traffic. Not to mention I don't have any blocking outbound rules other than the pfBlockerNG ones used for DNSBL.

                                    What is missing from the ipfw sched show output? I don't notice anything.

                                    pfSense CE 2.7.2

                                    • w0wW
                                      w0w
                                      last edited by

                                      @chrcoluk:

                                      It probably won't be until Saturday, but I will post it. I tried an install of 2.4.0, restored the config, and it functions correctly with 100% the same configuration. Back on 2.4.2 the problem comes back (tested yesterday).

                                      I can post the limiter config now, I guess, as it's in the GUI but the enable box is unticked.

                                      I have two issues with what john posted.

                                      1 - It only applies fq_codel at boot; it will get lost on a limiter reload.
                                      2 - The instructions he posted were not detailed, so if I follow his setup I cannot be sure I am doing it right.

                                      1. It applies every time something causes a reload of packages, or at boot. Actually, I don't understand why you need to reload the limiter.
                                      2. Maybe. It depends.

                                      You can just configure your limiters via the GUI and then run this command via the GUI command prompt:

                                      /sbin/ipfw sched 1 config pipe 1 type fq_codel target 7ms quantum 2000 flows 2048 && /sbin/ipfw sched 2 config pipe 2 type fq_codel target 7ms quantum 2000 flows 2048  
                                      

                                      Make sure that you have not mixed up the traffic direction and masks.
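                                      As a quick sanity check (a sketch; assumes two limiters numbered 1 and 2), the masks should show up under the schedulers themselves, not just under the queues:

                                      ```shell
                                      # After applying fq_codel, each scheduler in `ipfw sched show` should list
                                      # a mask line (src-ip mask for one direction, dst-ip mask for the other).
                                      /sbin/ipfw sched show | grep -E 'mask|FQ_CODEL'
                                      ```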

                                      Show your GUI config, including the LAN rule's In/Out pipes and your modified rules.limiter.

                                      @chrcoluk:

                                      What is missing on the ipfw sched show? I dont notice anything.

                                      The mask is missing.

                                      Actually, I am on 2.4.3, so I am not sure whether something is broken on 2.4.2.

                                      • C
                                        chrcoluk
                                        last edited by

                                        OK, tonight I will unpatch pfSense so it doesn't use /root/rules.limiter.

                                        I will just hit apply in the limiter config, then add the command to a boot script the same as john, and reboot for it to take effect.

                                        Then I will check to see if the mask appears in ipfw sched show. (PF firewall rules will have no impact on that.)

                                        The masks have been checked more times than I can count; they are no different from the guide posted by the OP and what john has set. It is an interesting observation that the mask is missing from ipfw sched show, but that's not down to a misconfiguration in the GUI. I cannot rule out the patch being the culprit until I unpatch and retest, which I will do tonight, but remember this is working fine on pfSense 2.4.0.

                                        If you guys consider the OP's post wrong, then maybe a new thread should be made, as I expect most people to follow the first post, not dig several pages in for something posted halfway through :)

                                        --edit--

                                        I have just done it, running the command via the GUI command prompt as you suggested.

                                        Output of ipfw sched show:

                                        
                                        00001:  79.987 Mbit/s    0 ms burst 0 
                                        q65537  50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
                                         sched 1 type FQ_CODEL flags 0x0 0 buckets 0 active
                                         FQ_CODEL target 7ms interval 100ms quantum 2000 limit 10240 flows 2048 ECN
                                           Children flowsets: 1 
                                        00002:  20.000 Mbit/s    0 ms burst 0 
                                        q65538  50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
                                         sched 2 type FQ_CODEL flags 0x0 0 buckets 0 active
                                         FQ_CODEL target 7ms interval 100ms quantum 2000 limit 10240 flows 2048 ECN
                                           Children flowsets: 2 
                                        

                                        The GUI config is no different from the screenshots I already posted; the only difference is that the patch is disabled and I ran the command provided to apply fq_codel.

                                        Attaching the GUI limiter config; I didn't post that before, sorry, but the LAN rule is already posted. You can see there are just cosmetic differences in the naming of the limiters and the bandwidth limits.

                                        I enabled logging of the outbound LAN rule, and sure enough the logs verify the rule is indeed being hit.

                                        	Jan 25 08:07:51	LAN	 Default allow LAN to any rule (1513840726)	  192.168.1.124:49240	  80.249.103.8:443	TCP:S
                                        Jan 25 08:07:51	LAN	 Default allow LAN to any rule (1513840726)	  192.168.1.124:49239	  80.249.103.8:443	TCP:S
                                        Jan 25 08:07:51	LAN	 Default allow LAN to any rule (1513840726)	  192.168.1.124:49238	  80.249.103.8:443	TCP:S
                                        Jan 25 08:07:51	LAN	 Default allow LAN to any rule (1513840726)	  192.168.1.124:49237	  80.249.103.8:443	TCP:S
                                        Jan 25 08:07:51	LAN	 Default allow LAN to any rule (1513840726)	  192.168.1.124:49236	  80.249.103.8:443	TCP:S
                                        Jan 25 08:07:51	LAN	 Default allow LAN to any rule (1513840726)	  192.168.1.124:49235	  80.249.103.8:443	TCP:S
                                        Jan 25 08:07:50	LAN	 Default allow LAN to any rule (1513840726)	  192.168.1.186:55112	  129.70.132.34:123	UDP
                                        Jan 25 08:07:49	LAN	 Default allow LAN to any rule (1513840726)	  192.168.1.186:45591	  89.238.136.135:123	UDP
                                        Jan 25 08:07:49	LAN	 Default allow LAN to any rule (1513840726)	  192.168.1.186:59559	  194.1.151.226:123	UDP
                                        Jan 25 08:07:49	LAN	 Default allow LAN to any rule (1513840726)	  192.168.1.186:43958	  85.199.214.99:123	UDP
                                        

                                        From where I sit, with the facts in front of me:

                                        The configuration looks good.
                                        It works properly on 2.4.0.
                                        I have tested using john's method.

                                        pfSense 2.4.2 now has a closed kernel source (no public repo), so I cannot rule out code changes they may have made breaking dummynet.

                                        If I set the In/Out pipe to none on the LAN outbound rule, everything works again (albeit without the limiter processing the traffic). If I had an issue with the LAN rules not processing the right traffic, that wouldn't be happening.

                                        I appreciate your help, of course, and it's a very nice catch to notice the mask is missing from my schedulers, even though it is configured and does show on the queues.

                                         ipfw queue show    
                                        q00001  50 sl. 0 flows (256 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
                                            mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
                                        q00002  50 sl. 0 flows (256 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
                                            mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
                                        

                                        I have just noticed another problem myself.

                                        In my ipfw queue show output the two queues are q00001 and q00002.

                                        Yet in ipfw sched show the two queues are q65537 and q65538.

                                        And in ipfw pipe show the two queues are q131073 and q131074.

                                        Now, the OP has the same anomaly, but in john's post and your own post you both don't: the queues in ipfw sched show match the queues in ipfw queue show, i.e. q00001 and q00002.
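                                        For what it's worth, those large numbers look like dummynet's internal object IDs rather than your configured flowsets: my reading of the dummynet source (treat this as an interpretation, not gospel) is that internal schedulers and queues are numbered by adding multiples of DN_MAX_ID = 2^16 to the pipe number. A quick sketch of the arithmetic:

                                        ```shell
                                        #!/bin/sh
                                        # Dummynet internal object numbering (my reading of the source; an
                                        # assumption): internal objects are offset from the pipe number by
                                        # multiples of DN_MAX_ID = 2^16. Seeing q65537/q65538 in `ipfw sched
                                        # show` would then mean the scheduler is still attached to the pipe's
                                        # internal FIFO queue, not the configured flowsets q00001/q00002.
                                        DN_MAX_ID=65536   # 2^16
                                        for pipe in 1 2; do
                                            echo "pipe $pipe: internal sched q$(( pipe + DN_MAX_ID )), internal queue q$(( pipe + 2 * DN_MAX_ID ))"
                                        done
                                        ```

                                        That matches the w0w output earlier in the thread, where pipe 1 shows "sched 65537 type FIFO" and flowset q131073.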

                                        pf1.png
                                        pf2.png
                                        pf3.png
                                        pf4.png


                                        • C
                                          chrcoluk
                                          last edited by

                                          Another quick update.

                                          I copied ipfw.ko and dummynet.ko from 2.4.0 and it works properly.

                                          I now have q00001 and q00002 in ipfw sched show, and the masks are present.

                                          00001:  80.000 Mbit/s    0 ms burst 0 
                                          q00001  50 sl. 0 flows (256 buckets) sched 1 weight 1 lmax 0 pri 0 droptail
                                              mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
                                           sched 1 type FQ_CODEL flags 0x0 0 buckets 0 active
                                           FQ_CODEL target 7ms interval 100ms quantum 2000 limit 10240 flows 2048 ECN
                                             Children flowsets: 1 
                                          00002:  20.000 Mbit/s    0 ms burst 0 
                                          q00002  50 sl. 0 flows (256 buckets) sched 2 weight 1 lmax 0 pri 0 droptail
                                              mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
                                           sched 2 type FQ_CODEL flags 0x0 0 buckets 0 active
                                           FQ_CODEL target 7ms interval 100ms quantum 2000 limit 10240 flows 2048 ECN
                                             Children flowsets: 2 
                                          

                                          So I plan to reinstall pfSense from a clean install image to see if the modules are OK that way, as it seems I have a module issue somewhere that has exhibited itself on this 2.4.2 installation. The internet no longer dies with the limiters activated.


                                          • C
                                            chrcoluk
                                            last edited by

                                            Now that it's working and I have had some time using it, I can provide some feedback on its performance.

                                            The effect on latency for uploads is better than with ALTQ, which matches my earlier experience.
                                            Downloads seem to perform similarly to ALTQ+HFSC, but the setup in terms of managing priorities etc. is greatly simplified for a similar result, although I have done no recent Steam testing. The setup here has no priorities, just the one downstream pipe for all traffic.


                                            Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.