Playing with fq_codel in 2.4



  • @markn6262 Gotcha, yeah, I was replying to your response to me. The config I posted should work and the change you specified in your response should not be required.

    @markn6262 said in Playing with fq_codel in 2.4:

    @uptownvagrant said in Playing with fq_codel in 2.4:

    ICMP subtypes: Traceroute

    I find that Windows tracert does not work with your config. Ping does, however. I changed your two workaround rules to a single ICMP any in/out rule, and tracert then works.



  • @gsmornot you are very fortunate. :) I'm curious, what amount of bufferbloat were you seeing before implementing shapers?



  • @uptownvagrant said in Playing with fq_codel in 2.4:

    @gsmornot you are very fortunate. :) I'm curious, what amount of bufferbloat were you seeing before implementing shapers?

    I am not as technical with this as you guys, but I will say DSLReports gives me a D on average without the config I shared. The grade is all due to upload; download has never been an issue here.

    I have tried a config like the one you posted. On average it gives me a C, with an occasional B. My guess is CPU power on the 3100, but that's just a guess. Keeping it simple just works best for this hardware. I will say that with a config like yours on my older desktop, which I pull out from time to time, I would get As. But it has much more power, and needs more power, which is my reason for the 3100.



  • @uptownvagrant said in Playing with fq_codel in 2.4:

    1.) Create "Out" limiter

    Tick "Enable"
    Name: FQ_CODEL_OUT
    Bandwidth: 90 Mbit/s
    Mask: None
    Queue Management Algorithm: Tail Drop
    Scheduler: FQ_CODEL

    target: 5
    interval: 100
    quantum: 1514
    limit: 10240
    flows: 1024

    Click Save/Apply Changes

    2.) Add "Out" queue

    Tick "Enable"
    Name: fq_codel_out_q
    Mask: None
    Queue Management Algorithm: Tail Drop
    Click Save/Apply Changes

    3.) Create "In" limiter

    Tick "Enable"
    Name: FQ_CODEL_IN
    Bandwidth: 90 Mbit/s
    Mask: None
    Queue Management Algorithm: Tail Drop
    Scheduler: FQ_CODEL

    target: 5
    interval: 100
    quantum: 1514
    limit: 10240
    flows: 1024

    Click Save/Apply Changes

    4.) Add "In" queue

    Tick "Enable"
    Name: fq_codel_in_q
    Mask: None
    Queue Management Algorithm: Tail Drop
    Click Save/Apply Changes

    I am attempting to duplicate your configuration, but wanted to confirm which limiter is upload and which is download so that I can set my bandwidths properly. I'm guessing that "In" is upload and "Out" is download?
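For reference, here is a rough sketch of what those four GUI steps boil down to at the ipfw/dummynet layer. This is an illustration only: the pipe/scheduler/queue numbers are assumptions, and pfSense generates its own identifiers when you click Apply.

```shell
# Hypothetical sketch only - pfSense assigns its own pipe/queue numbers.

# 1.) "Out" limiter: a 90 Mbit/s pipe with an FQ_CODEL scheduler at the defaults
ipfw pipe 1 config bw 90Mbit/s
ipfw sched 1 config pipe 1 type fq_codel target 5 interval 100 quantum 1514 limit 10240 flows 1024

# 2.) "Out" child queue attached to that scheduler
ipfw queue 1 config sched 1

# 3.) + 4.) "In" limiter and queue, same shape
ipfw pipe 2 config bw 90Mbit/s
ipfw sched 2 config pipe 2 type fq_codel target 5 interval 100 quantum 1514 limit 10240 flows 1024
ipfw queue 2 config sched 2
```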



  • It depends on the perspective. I normally put myself in place of the device.

    So if I apply it on the WAN interface, "IN" is download and "OUT" is upload. So for the first floating rule, matching incoming traffic, you use the regular IN/OUT queue order, but for the second rule, which matches outgoing traffic, you reverse the queues.

    Still, @uptownVagrant's configuration confuses me. I can confirm, however, that under load pings and traceroutes are not dropped.



  • @xciter327 said in Playing with fq_codel in 2.4:

    It depends on the perspective.

    Yes, I agree. From my perspective, traffic entering (IN) the WAN port would be leaving my local network (upload), and traffic exiting (OUT) the WAN port would be entering my local network (download). IIRC this was the same perspective used by Netgate in their recent hangout regarding fq_codel. It's all a little confusing, which is why I wanted to confirm with @uptownVagrant .



  • @wgstarks I've updated the post to better explain IN and OUT. From the perspective of WAN, IN is traffic coming into the interface from the Internet, aka download, and OUT is traffic leaving the interface and bound for the Internet, aka upload. Hope that clears things up.



  • @uptownvagrant
    Thanks for clarifying.

    Also (minor issue), in rules creation step 3 I couldn’t select “Default” for gateway. Had to actually pick a gateway as you did in step 4.



  • @wgstarks Make sure you are choosing "Direction: In" on that rule. IN rules do not require a gateway be selected.



  • @uptownvagrant
    Worked. Thanks again.



  • For those interested, here is a quick comparison I ran today of ALTQ shapers using FAIRQ + CoDel and DummyNet limiters using FQ-CoDel. I set both ALTQ shapers and the limiters to 94 Mbit/s.

    0_1543553108516_lab.jpg

    0_1543552097254_FairQ+CoDel_vs_FQ-CoDel.jpg

    0_1543552183964_rrul-2018-11-29T175857.808604.rrul_C2758_pfSense2_4_4_100Mb_FairQ_CoDel_netperf2.flent.gz
    0_1543552199424_rrul-2018-11-28T183120.881107.rrul_C2758_pfSense2_4_4_100Mb_FQ-CoDel_netperf2.flent.gz



  • @uptownvagrant Thanks!

    fq_codel is definitely better, especially for upload bandwidth, but fairq+codel is not that bad for how easy it is to set up.



  • This post is deleted!


  • @ricardox

    Substitua CoDel por TailDrop. Não use o CoDel AQM com FQ-CoDel.

    My apologies if the English to Portuguese translation is incorrect. Replace CoDel with TailDrop. Don't use CoDel AQM with FQ-CoDel.



  • This post is deleted!


  • @harvy66 Okay, thank you so much, it worked.



  • This config works perfectly for one WAN!
    I have two ADSL lines in a load-balancing gateway group (both WANs on tier 1),
    but I don't know how to configure the floating rules to use this balance group.
    The WANs have different speeds.

    Thanks!



  • I do not speak English, but Google speaks for me, haha. Create two floating rules, one for each gateway, this way: 0_1544128601650_Rules.png



  • @ricardox Thanks, I'm going to try it.
    This is what I have:
    WAN1 and WAN2

    I created 8 floating rules (following the example of @uptownVagrant ),
    but I think this is not the correct way to achieve the desired goal.

    1.)
    Action: Pass
    Quick: Tick Apply the action immediately on match.
    Interface: WAN1
    Direction: out
    Address Family: IPv4
    Protocol: ICMP
    ICMP subtypes: Traceroute
    Source: any
    Destination: any

    2.)
    Action: Pass
    Quick: Tick Apply the action immediately on match.
    Interface: WAN2
    Direction: out
    Address Family: IPv4
    Protocol: ICMP
    ICMP subtypes: Traceroute
    Source: any
    Destination: any

    3.)
    Action: Pass
    Quick: Tick Apply the action immediately on match.
    Interface: WAN1
    Direction: any
    Address Family: IPv4
    Protocol: ICMP
    ICMP subtypes: Echo reply, Echo Request
    Source: any
    Destination: any
    Description: limiter drop echo-reply under load workaround

    4.)
    Action: Pass
    Quick: Tick Apply the action immediately on match.
    Interface: WAN2
    Direction: any
    Address Family: IPv4
    Protocol: ICMP
    ICMP subtypes: Echo reply, Echo Request
    Source: any
    Destination: any
    Description: limiter drop echo-reply under load workaround

    5.)
    Action: Match
    Interface: WAN1
    Direction: in
    Address Family: IPv4
    Protocol: Any
    Source: any
    Destination: any
    Gateway: WAN1
    In / Out pipe: fq_codel_WAN1_in_q / fq_codel_WAN1_out_q

    6.)
    Action: Match
    Interface: WAN2
    Direction: in
    Address Family: IPv4
    Protocol: Any
    Source: any
    Destination: any
    Gateway: WAN2
    In / Out pipe: fq_codel_WAN2_in_q / fq_codel_WAN2_out_q

    7.)
    Action: Match
    Interface: WAN1
    Direction: out
    Address Family: IPv4
    Protocol: Any
    Source: any
    Destination: any
    Gateway: WAN1
    In / Out pipe: fq_codel_WAN1_out_q / fq_codel_WAN1_in_q

    8.)
    Action: Match
    Interface: WAN2
    Direction: out
    Address Family: IPv4
    Protocol: Any
    Source: any
    Destination: any
    Gateway: WAN2
    In / Out pipe: fq_codel_WAN2_out_q / fq_codel_WAN2_in_q



  • @mr-cairo 0_1544132824688_1.png 0_1544132834067_2.png 0_1544132845423_3.png

    Do the same for Upload



  • So an update from myself.

    Still trying to perfect the issue of interactive packets being dropped, I noticed PIE is now implemented as an AQM option.

    So what I have done for now, to test on my own network, is change droptail to PIE for the pipe and queue; the scheduler is still using fq_codel.

    Also, in regards to the dynamic flow sets, something doesn't seem quite right. As the OP pointed out, there is just one single flow created, and it always shows the source and target IP as 0.0.0.0, so regardless of what mask is configured there will only ever be one flow. I suspect that if multiple flows can be made to work, results will be better. The reason being: if you have, say, two flows, one a fully saturated download over fast FTP and the other an interactive SSH session, the latter will never fill its flow queue and won't drop packets, while the first will fill and drop packets. As it is now, both TCP streams share the same flow, so how does the shaper know which packets to drop? The only help there seems to come from the quantum value, which prevents smaller packets from being put at the end of the queue.

    --update--

    OK, I have managed to get multiple flows working. It took a lot of experimentation, but I got there after finally understanding what the man page for ipfw/dummynet is explaining. If you get it wrong, dummynet stops passing traffic ;) but I am only testing on my home network, so that's fine.

    --update2--

    I chose to have a flow per Internet IP rather than per LAN device. Also, this is only for IPv4 right now; I think that if you configure it so it overloads the flow limit it breaks, and obviously the IPv6 address space is far bigger.

    root@PFSENSE home # ipfw sched show
    00001:  69.246 Mbit/s    0 ms burst 0 
    q00001 500 sl. 0 flows (256 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
        mask:  0x00 0x000000ff/0x0000 -> 0x00000000/0x0000
     sched 1 type FQ_CODEL flags 0x1 256 buckets 4 active
     FQ_CODEL target 5ms interval 30ms quantum 300 limit 1000 flows 1024 ECN
        mask:  0x00 0xffffff00/0x0000 -> 0x00000000/0x0000
       Children flowsets: 1 
    BKT ___Prot___ _flow-id_ ______________Source IPv6/port_______________ _______________Dest. IPv6/port_______________ Tot_pkt/bytes Pkt/Byte Drp
      0        ip       0                                       ::/0                                           ::/0        6      288  0    0   0
     17 ip        46.17.x.x/0             0.0.0.0/0        4      176  0    0   0
    123 ip      208.123.x.x/0             0.0.0.0/0        5      698  0    0   0
    249 ip       80.249.x.x/0             0.0.0.0/0        6      168  0    0   0
    00002:  18.779 Mbit/s    0 ms burst 0 
    q00002 500 sl. 0 flows (256 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
        mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
     sched 2 type FQ_CODEL flags 0x0 0 buckets 0 active
     FQ_CODEL target 5ms interval 300ms quantum 300 limit 800 flows 1024 NoECN
       Children flowsets: 2 
    

    --update 3--

    In initial tests this is working sweetly. In the paste below you can clearly see the bulk download flow, and notice that it only drops packets from that flow. I also tested with Steam, and there are now zero dropped packets on flows outside of Steam. Wow.

    BKT ___Prot___ _flow-id_ ______________Source IPv6/port_______________ _______________Dest. IPv6/port_______________ Tot_pkt/bytes Pkt/Byte Drp
      0        ip       0                                       ::/0                                           ::/0        7      348  0    0   0
     17 ip        46.17.x.x/0             0.0.0.0/0        3      132  0    0   0
     58 ip      216.58.x.x/0             0.0.0.0/0        1       52  0    0   0
    123 ip      208.123.x.x/0             0.0.0.0/0        5      698  0    0   0
    135 ip       5.135.x.x/0             0.0.0.0/0     11919 17878500  3 4500   2
    249 ip       80.249.x.x/0             0.0.0.0/0        6      168  0    0   0
    254 ip     162.254.x.x/0             0.0.0.0/0        1       40  0    0   0
    


  • @chrcoluk said in Playing with fq_codel in 2.4:

    So an update from myself.

    Still trying to perfect the issue of interactive packets being dropped, I noticed PIE is now implemented as an AQM option.

    What interactive packets being dropped issue? Can you be specific? How are you measuring drops?

    So what I have done for now, to test on my own network, is change droptail to PIE for the pipe and queue; the scheduler is still using fq_codel.

    When FQ-CoDel is used as the scheduler, the AQM you choose is not utilized. (fq_codel has separate enqueue/dequeue functions - AQM is ignored) https://forum.netgate.com/post/804118

    Also, in regards to the dynamic flow sets, something doesn't seem quite right. As the OP pointed out, there is just one single flow created, and it always shows the source and target IP as 0.0.0.0, so regardless of what mask is configured there will only ever be one flow. I suspect that if multiple flows can be made to work, results will be better. The reason being: if you have, say, two flows, one a fully saturated download over fast FTP and the other an interactive SSH session, the latter will never fill its flow queue and won't drop packets, while the first will fill and drop packets. As it is now, both TCP streams share the same flow, so how does the shaper know which packets to drop? The only help there seems to come from the quantum value, which prevents smaller packets from being put at the end of the queue.

    What you are seeing using 'ipfw sched show' is not exposing the dynamic internal flows and sub-queues that FQ-CoDel is managing. It seems you are trying to create a visual representation, via 'ipfw sched show', of the mechanism that is already in place and operating within FQ-CoDel. See this post https://forum.netgate.com/post/803139



  • I am not convinced there is anything more than one flow in the existing configuration, especially after reading this and taking it all in. Be warned, it's a big lot of text.

    https://www.freebsd.org/cgi/man.cgi?query=ipfw&sektion=8&apropos=0&manpath=FreeBSD+11.2-RELEASE+and+Ports

    But you can skip to the dummynet section.

    I am measuring drops in two ways.

    By looking at the output from the command I pasted here (I assumed that was obvious), and also by holding a key down in an SSH terminal session whilst downloading bulk data, to see if there is visible packet loss. Packet loss is very visible in an SSH terminal session, as the cursor will stick and jump.

    With a single flow, fq_codel is fine at reducing jitter and latency, and that is all 95% of people here seem to care about, just getting their DSLReports grades up. However, I observed that it was not able to differentiate between different TCP streams and as such did not intelligently drop packets; the droptail AQM, for example, just blocks new packets from entering the queue when it is full, regardless of what those packets are.

    Note that for flow separation, the dummynet documentation states that a mask needs to be set on the queue as well as on the scheduler.

    When running 'ipfw sched show', there are two indications of flows.

    One is the number of buckets, which stays at 1 with the older configuration; the other is in 'ipfw queue show', where the number of flows stays at 0.

    I managed to figure out the syntax for IPv6 flow separation as well, based on how it's configured for IPv4.

    Now, I am not saying you are wrong in that there are invisible flows not presented by the status commands; I am just saying I see no evidence of them, based on performance metrics and the tools provided that show the active status of dummynet.

    If you want to see the changes I made, they are as follows; these commands will generate a flow for each remote IPv4 address on downloads.

    ipfw sched 1 config pipe 1 queue 500 mask src-ip 0xffffff00 type fq_codel ecn target 5 interval 30 quantum 300 limit 1000
    ipfw queue 1 config sched 1 queue 500 mask src-ip 0x000000ff

    You don't have to do this, and it is more complex; if all you care about is bufferbloat, then the existing single-flow configuration is fine, since that is what it does: it reduces bufferbloat. But if you want flow separation, the dummynet man page seems to clearly state that masks have to be set on the scheduler and the queue.

    It is important to understand the role of the SCHED_MASK and FLOW_MASK,
    which are configured through the commands
        ipfw sched N config mask SCHED_MASK ...
    and
        ipfw queue X config mask FLOW_MASK ....

    The SCHED_MASK is used to assign flows to one or more scheduler
    instances, one for each value of the packet's 5-tuple after applying
    SCHED_MASK. As an example, using ``src-ip 0xffffff00'' creates one
    instance for each /24 destination subnet.

    The FLOW_MASK, together with the SCHED_MASK, is used to split packets
    into flows. As an example, using ``src-ip 0x000000ff'' together with the
    previous SCHED_MASK makes a flow for each individual source address. In
    turn, flows for each /24 subnet will be sent to the same scheduler
    instance.
    

    Sorry about the colour changes; the source text isn't like that. There is mention of scheduler instances, which may be what you are referring to, but these don't seem to provide intelligent dropping of packets, whilst flow separation does.
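To make the SCHED_MASK/FLOW_MASK split concrete, here is a small standalone sketch (mine, not dummynet code) of the masking arithmetic itself. It just shows how a source IP such as 46.17.1.5 splits into a per-/24 scheduler-instance key under 0xffffff00 and a per-host flow key under 0x000000ff:

```shell
#!/bin/sh
# Sketch of the mask arithmetic only; dummynet does this internally.

ip_to_int() {
    # Convert a dotted-quad IPv4 address to a 32-bit integer.
    old_ifs=$IFS; IFS=.
    set -- $1
    IFS=$old_ifs
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

ip=$(ip_to_int 46.17.1.5)
sched_key=$(( ip & 0xffffff00 ))   # SCHED_MASK: one scheduler instance per /24
flow_key=$(( ip & 0x000000ff ))    # FLOW_MASK: one flow per host inside it

printf 'sched instance key: %s\n' "$sched_key"
printf 'flow key:           %s\n' "$flow_key"
```

Two hosts in the same /24 (e.g. 46.17.1.5 and 46.17.1.9) share a scheduler-instance key but get different flow keys, which matches the man page text above.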



  • Thanks for the info regarding AQM.

    I did more testing, and I also get good results by setting a mask on the queue only, leaving the scheduler at its default (so just the single flow) but reversing the mask.

    So basically, the original config had the mask set to dst-ip on ingress and src-ip on egress, meaning the flows are per device on the LAN. I never wanted this, though I suppose it's good if you run an ISP.
    I then created the new config, which creates separate flows on the scheduler (visible in the diagnostics), but forgot that I had also reversed the masking; this is what fixed my packet loss.
    I then went back to the queue-only mask, but with the reversed mask, so src-ip instead of dst-ip for ingress, and this was good as well.

    I won't bother testing alternative AQMs; they were only briefly tried, and all the testing with separate flows was done back on droptail, so I will just accept what both of you say on that. For me it simply seems all I had to do was reverse the masking, so that I have dynamic queues per remote host instead of per local host.



  • @chrcoluk Thank you for posting all of the details of your test method and config - much appreciated.

    I'm on the US west coast so I'll dig into this later today, just after 0100 currently, to see if I can recreate your findings. That being said, I'm pinging @Rasool on this, as my understanding is that your configuration should not be needed for flow separation: RFC 8290 states that FQ-CoDel does its own 5-tuple hashing to identify flows.



  • @chrcoluk I am not able to recreate your findings based on the information you gave.

    I tested using two Flent clients running overlapping RRUL tests while I used a separate client to initiate an SSH session to a separate destination and held down a key and watched for any sticking or jumping of the cursor. I did not experience any sticking or jumping. The RRUL tests ran for 130 seconds each, overlapping, and easily saturated the limiters during the test. I set the limiters to more closely match what you had set for bandwidth.

    A couple of things I noticed about your config:

    • You are using intervals, quantums, and limits that are not the defaults. If you set these to the defaults for each pipe, does that change the behavior of the SSH session while the limiters are under load?
    • If you halve your IN and OUT limiter bandwidth values, while using the default FQ-CoDel scheduler values, does this change the behavior of the SSH session while the limiters are under load?

    Here is what my pipes, schedulers, and queues look like. I'm using the configuration I posted here https://forum.netgate.com/post/807490:

    Limiters:
    00001:  18.000 Mbit/s    0 ms burst 0 
    q131073  50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
     sched 65537 type FIFO flags 0x0 0 buckets 0 active
    00002:  69.000 Mbit/s    0 ms burst 0 
    q131074  50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
     sched 65538 type FIFO flags 0x0 0 buckets 0 active
    
    
    Schedulers:
    00001:  18.000 Mbit/s    0 ms burst 0 
    q65537  50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
     sched 1 type FQ_CODEL flags 0x0 0 buckets 0 active
     FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 NoECN
       Children flowsets: 1 
    00002:  69.000 Mbit/s    0 ms burst 0 
    q65538  50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
     sched 2 type FQ_CODEL flags 0x0 0 buckets 0 active
     FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 NoECN
       Children flowsets: 2 
    
    
    Queues:
    q00001  50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
    q00002  50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
    


  • @uptownvagrant

    @uptownvagrant said in Playing with fq_codel in 2.4:

    That being said, I'm pinging @Rasool on this, as my understanding is that your configuration should not be needed for flow separation: RFC 8290 states that FQ-CoDel does its own 5-tuple hashing to identify flows.

    You are right. FQ-CoDel internally identifies flows and hashes them to internal sub-queues. External flow separation and hashing are not required in simple configurations.



  • If I reduce my download pipe to, say, half, then yes, there is no packet loss, as the line then has far more unused buffer. Well, actually, sorry, there was still packet loss, but at half speed the drops tended to occur only early in the test.

    The commands I posted are not exactly what I am actually using; I am a bit closer to the defaults than what was posted, but quantum 300 is used, as it is recommended to improve performance for smaller packets, and it does indeed help my SSH packets (they were worse on the default). I am now back on the default pipe depth of 50.

    The interval was manually tuned by me to suit my network conditions, although I feel that one has no measurable effect, either good or bad.

    I have no experience with RRUL, so I don't know if that would have affected me. It's not just any download that causes dropped packets; it's usually just things like Steam, which absolutely flood the ingress pipe with tons of packets.

    But it doesn't matter for me now, as making the dynamic queues one per remote IP seems to have pretty much resolved it. It wasn't a major issue, as I already had it working pretty well, but this change has perfected it.

    But before you ask: I have of course tried all default values; I did so the very first time I used fq_codel and dummynet. I always start off by keeping things as simple as possible, and only stray from that if I feel a need to.

    Bear in mind the vast majority of my home LAN is just my desktop PC; all other network devices are almost always idle aside from my STB, and if I am watching streams on the STB then I am not on my PC, so interactivity doesn't matter on my PC then. So the masks I was using before, which separated flows per LAN device, were really pointless for me, but the other way round they're really useful.

    00001:  69.246 Mbit/s    0 ms burst 0 
    q00001  50 sl. 0 flows (256 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
        mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
     sched 1 type FQ_CODEL flags 0x0 0 buckets 1 active
     FQ_CODEL target 5ms interval 30ms quantum 300 limit 1000 flows 1024 ECN
       Children flowsets: 1 
    BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
      0 ip           0.0.0.0/0             0.0.0.0/0        1       60  0    0   0
    00002:  18.779 Mbit/s    0 ms burst 0 
    q00002  50 sl. 0 flows (256 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
        mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
     sched 2 type FQ_CODEL flags 0x0 0 buckets 0 active
     FQ_CODEL target 5ms interval 300ms quantum 300 limit 800 flows 1024 NoECN
       Children flowsets: 2 
    
    q00001  50 sl. 0 flows (256 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
        mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
    q00002  50 sl. 0 flows (256 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
        mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
    
    00001:  69.246 Mbit/s    0 ms burst 0 
    q65537  50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
     sched 65537 type FIFO flags 0x0 0 buckets 0 active
    00002:  18.779 Mbit/s    0 ms burst 0 
    q131074  50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
     sched 65538 type FIFO flags 0x0 0 buckets 0 active
    


  • @chrcoluk said in Playing with fq_codel in 2.4:

    If I reduce my download pipe to, say, half, then yes, there is no packet loss, as the line then has far more unused buffer. Well, actually, sorry, there was still packet loss, but at half speed the drops tended to occur only early in the test.

    The idea behind this question was: have you set your bandwidth too close to where your ISP is dropping packets? I was assuming, based on your bandwidth settings, that you were testing over the Internet, so your ISP, as well as other outside factors, are in the mix. What does your bufferbloat look like when you halve the bandwidth? There is going to be some packet loss, that's actually the point, but my experience so far is that FQ-CoDel does a pretty great job of prioritizing interactive flows, and I haven't been able to recreate the poor handling of these flows that a couple of folks have mentioned.

    The commands I posted are not exactly what I am actually using; I am a bit closer to the defaults than what was posted, but quantum 300 is used, as it is recommended to improve performance for smaller packets, and it does indeed help my SSH packets (they were worse on the default). I am now back on the default pipe depth of 50.

    Setting the pipe or queue 'queue length' is ignored when you are using FQ-CoDel. CoDel is handling the queue length dynamically.

    The interval was manually tuned by me to suit my network conditions, although I feel that one has no measurable effect, either good or bad.

    I have no experience with RRUL, so I don't know if that would have affected me. It's not just any download that causes dropped packets; it's usually just things like Steam, which absolutely flood the ingress pipe with tons of packets.

    I don't personally use Steam but I have it on my network. I'm assuming you are referring to game downloads when you mention "absolutely flood the ingress pipe with tons of packets"?

    But it doesn't matter for me now, as making the dynamic queues one per remote IP seems to have pretty much resolved it. It wasn't a major issue, as I already had it working pretty well, but this change has perfected it.

    Glad it's working perfectly for you now, I was hoping that we could identify the root cause.



  • Yes, I mean Steam downloads. Typically there are at least 24 TCP streams opened, but often over 30 if it's a large game. If you have a local download server configured, then that's 24+ TCP streams with low RTT flooding the network. Steam has no persistent way of reducing the number of connections; you can reduce it via a command in the console, but it is not saved, so when you restart Steam it goes back to the default.

    If you set a server on the other side of the world, so the RTT is much higher, that is another way to mitigate the problem, as a higher RTT makes the packets much more passive on the network. So, e.g., from Asia to the UK I can still hit full download speed simply due to the really high number of threads; 32 threads is way overkill on a 10 ms latency server.

    What I am considering doing is changing the mask to a /24 instead of a /32 for the remote side, as the Steam IPs often come from the same /24. That will make them fight with each other in one dynamic queue while still separating them from the other dynamic queues, which will be even more effective than what I have now.
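If I go that route, I expect the change would look something like the following. This is an untested sketch based on my earlier commands and on one reading of the man page (scheduler/queue numbers are assumptions): the scheduler mask stays per-/24, and dropping the host-bits flow mask should leave one dynamic queue per remote /24.

```shell
# Untested sketch: aggregate remote hosts per /24 instead of per /32.
# One scheduler instance per source /24, and no finer flow mask,
# so all hosts in a /24 share one dynamic queue.
ipfw sched 1 config pipe 1 mask src-ip 0xffffff00 type fq_codel ecn target 5 interval 30 quantum 300 limit 1000
ipfw queue 1 config sched 1
```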

    Interesting that fq_codel also seems to make a lot of tunables useless; this really needs to be documented somewhere, because it's not documented on the dummynet or pfSense pages. But I never did notice a difference between 1000 slots and 50 slots anyway, which is why it was reduced back to the default 50 slots; that explains why no difference was observed.

    What I just want to say here is that sometimes on the Internet people don't like it when you move away from the defaults. You two guys have been fine, as you have been very polite about it, but sometimes I have come across people who even get angry :). There is a reason things can be tuned: you cannot choose a default that works optimally in 100% of situations. It's impossible. Even auto-tuning algorithms cannot be 100%.

    But to me Steam is the ultimate test. I have yet to come across any speed tester or other automated testing tool that abuses the network as much as Steam; Steam basically DDoSes your network as far as I am concerned, and it's abusive to use so many threads. The only way I have found to stress a network harder is to actually DDoS it.

    I am happy to try to put your minds at rest and test quantum 1514 whilst using these masks. Quantum 300 was suggested by someone earlier in this thread and is also suggested by an fq_codel expert somewhere on the net; it supposedly gives queues with packets smaller than the quantum size higher priority than queues with packets larger than the quantum size, so they are less likely to have packets dropped when the pipe is full.

    I am lucky that my ISP has no visible congestion, so I can hit my line rate 24/7/365, which makes configuring a pipe size easier. I did take into account overheads for DSL etc., so the rate configured for the pipe is lower than the actual achievable TCP speed after overheads. The only thing I have to watch out for, of course, is my DSL sync speed dropping, but thankfully it has been stable for a long time now.



  • @chrcoluk said in Playing with fq_codel in 2.4:

    Yes, I mean Steam downloads. Typically there are at least 24 TCP streams opened, but often over 30 if it's a large game. If you have a local download server configured, then that's 24+ TCP streams with low RTT flooding the network. Steam has no persistent way of reducing the number of connections; you can reduce it via a command in the console, but it is not saved, so when you restart Steam it goes back to the default.

    If you set a server on the other side of the world, so the RTT is much higher, that is another way to mitigate the problem, as a higher RTT makes the packets much more passive on the network. So, e.g., from Asia to the UK I can still hit full download speed simply due to the really high number of threads; 32 threads is way overkill on a 10 ms latency server.

    Thanks for detailing what you're seeing with regard to Steam game downloads. I'll see if I can recreate your experience on one of my networks.

    Interesting that fq_codel also seems to make a lot of tunables useless; this really needs to be documented somewhere, because it's not documented on the dummynet or pfSense pages. But I never did notice a difference between 1000 slots and 50 slots anyway, which is why it was reduced back to the default 50 slots; that explains why no difference was observed.

    This is why I brought it up because it's not well documented.

    What I just want to say here is that sometimes on the Internet people don't like it when you move away from the defaults. You two guys have been fine, as you have been very polite about it, but sometimes I have come across people who even get angry :). There is a reason things can be tuned: you cannot choose a default that works optimally in 100% of situations. It's impossible. Even auto-tuning algorithms cannot be 100%.

    To me Steam is the ultimate test. I have yet to come across any speed tester or other automated testing tool that abuses the network as much as Steam: it basically DDoSes your network as far as I am concerned, and using so many threads is abusive. The only way I have found to stress a network harder is to actually DDoS it.

    I'm happy to put your minds at rest and test quantum 1514 whilst using these masks. Quantum 300 was suggested by someone earlier in this thread and is also suggested by an fq_codel expert somewhere on the net; it supposedly gives queues with packets smaller than the quantum size higher priority than queues with packets larger than the quantum size, so they are less likely to have packets dropped when the pipe is full.

    A quantum of 300 seems perfectly sane for your use case and test scenario. If you look at the sqm-scripts for OpenWrt, which came out of CeroWrt's SQM work at bufferbloat.net, you will see an FQ-CoDel quantum of 300 or 1514 depending on whether PRIO is being used on the children - the simplest scripts use the default quantum of 1514. My observation was that you were not using defaults, and my experience has been that folks sometimes don't really understand all of the nerd knobs - I know I've been guilty of this - and start making changes based on out-of-context bits of information collected on the Internet. In short, we sometimes act against our own interest in the pursuit of "tuning" - not saying that you are, I just wanted to see if the defaults had any positive impact.

    For instance, setting "limit" to anything under the default only seems beneficial on severely memory-constrained systems - the hard limit should rarely, if ever, be hit, as CoDel performs drops long before "limit" would be reached (RFC 8290, section 5.2.3). You run the risk of setting limit too low and seeing drops, "fq_codel_enqueue over limit", before they should occur based on your target. A limit of 1001 should be OK, but leaving it at the default 10240 on your system may drop less without any adverse effects, seeing as your system has 4 GB of RAM.
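    For what it's worth, the quantum effect can be sketched with a toy deficit-round-robin loop. This is a simplification of the FQ-CoDel scheduler, not actual dummynet code, and the packet sizes and counts are made up:

```python
# Toy deficit round robin (DRR) pass to illustrate the quantum effect:
# each backlogged queue banks one quantum of credit per round and may
# only dequeue packets it can "afford".
from collections import deque

def drr(flows, quantum, rounds):
    """flows: {name: [packet sizes]}. Returns (bytes_sent, first_round_served)."""
    queues = {n: deque(sizes) for n, sizes in flows.items()}
    sent = {n: 0 for n in flows}
    first = {n: None for n in flows}
    deficit = {n: 0 for n in flows}
    for rnd in range(1, rounds + 1):
        for n, q in queues.items():
            if not q:
                continue
            deficit[n] += quantum  # bank one quantum per round
            while q and q[0] <= deficit[n]:
                deficit[n] -= q[0]
                sent[n] += q.popleft()
                if first[n] is None:
                    first[n] = rnd
    return sent, first

flows = {"small": [100] * 100, "bulk": [1500] * 100}  # ACK-sized vs MTU-sized
for quantum in (300, 1514):
    print(quantum, drr(flows, quantum, rounds=10))
```

    With quantum 300 both queues still move the same number of bytes (DRR stays byte-fair), but the small-packet queue is served every round while the 1500-byte queue has to bank deficit for five rounds per packet; with quantum 1514 both queues send every round.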



  • I want my ISP to replace my Puma modem with a Broadcom one, but they refuse until I show them that my modem is bad. As you have seen, my modem still has high latency in Flent tests despite fq_codel. Is there anything more I can do to show the tech that the modem is the cause of this issue?



  • @uptownvagrant The limit value has now also been reset to the default of 10240; the only tuned values left in my rules are the quantum and the reversed masks. Thanks for your help.

    Also, I do agree that tuning can most definitely make things worse. The enqueue messages that appear in the logs are caused by the limit value being set too low, which is probably why it has a high default. The bufferbloat website suggests reducing it when speed is below 100 Mbit, but I tested their recommendation and it generated lots of "enqueue over limit" warnings, which, as muppet pointed out, can bring down the OS if too many of them appear.
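    As a back-of-envelope check on why the default hard limit is rarely a memory problem (assuming every queued packet is a full-size 1514-byte Ethernet frame; actual mbuf accounting differs):

```python
# Worst-case buffer memory for one fq_codel instance at the default
# hard limit, assuming every queued packet is a full-size frame.
LIMIT_PKTS = 10240   # default dummynet fq_codel "limit"
FRAME_BYTES = 1514   # max standard Ethernet frame
worst_case_mib = LIMIT_PKTS * FRAME_BYTES / 2**20
print(f"{worst_case_mib:.1f} MiB")  # roughly 15 MiB
```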



  • @chrcoluk

    I was able to test Steam distributed downloads using your config minus the masks. I was not able to recreate your findings, and I'm still wondering whether the interactive flow drops (the SSH session) were due to something upstream from you, or whether there was a very low-probability hash collision. If it was consistent, then a hash collision is highly improbable.

    Notes:

    • My testing with Flent and netperf actually pushes more pps and more flows than what I saw with Steam.
    • The bandwidths configured to your values are a very small percentage of my lab's upstream up/down limits, so those were not a factor in my testing.
    • ICMP and HTTPS webUI traffic was not passing through the limiter. All other traffic was set to be placed in the limiter queues.
    Limiters:
    00001:  18.779 Mbit/s    0 ms burst 0 
    q131073  50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
     sched 65537 type FIFO flags 0x0 0 buckets 0 active
    00002:  69.246 Mbit/s    0 ms burst 0 
    q131074  50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
     sched 65538 type FIFO flags 0x0 0 buckets 0 active
    
    
    Schedulers:
    00001:  18.779 Mbit/s    0 ms burst 0 
    q65537  50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
     sched 1 type FQ_CODEL flags 0x0 0 buckets 1 active
     FQ_CODEL target 5ms interval 100ms quantum 300 limit 10240 flows 1024 NoECN
       Children flowsets: 1 
    BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
      0 ip           0.0.0.0/0             0.0.0.0/0     193      9082  0    0   0
    00002:  69.246 Mbit/s    0 ms burst 0 
    q65538  50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
     sched 2 type FQ_CODEL flags 0x0 0 buckets 1 active
     FQ_CODEL target 5ms interval 100ms quantum 300 limit 10240 flows 1024 NoECN
       Children flowsets: 2 
      0 ip           0.0.0.0/0             0.0.0.0/0     51106 76002641 29 43500 1209
    
    
    Queues:
    q00001  50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
    q00002  50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
    

    0_1545244654861_Bandwidth.png

    The RTT for almost all of these Steam servers was under 8 ms.

    0_1545232588645_Steam_flows.png

    There were no drops associated with the SSH session I had from inside the LAN to outside the WAN.

    0_1545244440194_SSH_flow.png

    Here is a Steam download flow where you can see many associated drops.

    0_1545244426065_Steam_limited_flow.png



  • I wouldn't worry about it too much.

    There are too many variables here: operating system, hardware used, RTT to the download servers, capacity of the connection, congestion, the provider used, and so on.

    I originally had really bad issues, which I detailed earlier in the thread. When I changed the hardware in my pfSense unit things improved, and then only minor issues remained in a few scenarios, Steam being one of them. Changing the masks so Steam downloads get their own dynamic queues made it perfect for me. Just because you cannot reproduce the issue yourself doesn't mean it isn't a problem, but that's fine; everyone is happy with what they have here. :)

    I have also decreased my pipe size a bit further so I can get perfect results on the 32-thread DSLReports test as well. It seems it was slightly too high for that.



  • @uptownVagrant may I know what this rule is for? I tried enabling this rule only, but traffic just bypassed the limiters. What's the difference between rule #3 and rule #4?

    3.) Add a match rule for incoming state flows so that they're placed into the FQ-CoDel in/out queues

    • Action: Match
    • Interface: WAN
    • Direction: in
    • Address Family: IPv4
    • Protocol: Any
    • Source: any
    • Destination: any
    • Description: WAN-In FQ-CoDel queue
    • Gateway: Default
    • In / Out pipe: fq_codel_in_q / fq_codel_out_q
    • Click Save


  • @knowbe4 Floating rule #3 matches flows that are ingress to the firewall WAN port and for which a state is created - an example would be traffic destined for a port you are forwarding into your LAN. Floating rule #4 matches flows that are egress, leaving the WAN port, and for which a state is created.

    Rules #3 and #4 explicitly place flows into the limiter queues. If you have configured other floating rules that match after the limiter rules, those may be bypassing the limiter.
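    Under the hood, those two floating Match rules correspond to pf rules along these lines (the interface name and pipe/queue numbers here are placeholders - the exact rules pfSense generates can be inspected in /tmp/rules.debug):

```
match in  on em0 inet all dnqueue (1, 2)   # WAN-In:  fq_codel_in_q / fq_codel_out_q
match out on em0 inet all dnqueue (2, 1)   # WAN-Out: fq_codel_out_q / fq_codel_in_q
```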



  • Forgive me if this was covered here - I tried to read it all, and I got my fq_codel set up well thanks to this thread.

    However, traceroute will only show one hop on Windows clients. From the pfSense router CLI, it shows the proper hops.
    C:\WINDOWS\system32>tracert google.com

    Tracing route to google.com [172.217.3.110]
    over a maximum of 30 hops:

    1 <1 ms <1 ms <1 ms router.lan [192.168.1.1]
    2 1 ms 1 ms 1 ms sea09s17-in-f14.1e100.net [172.217.3.110]

    From router:
    traceroute google.com
    traceroute to google.com (172.217.3.110), 64 hops max, 40 byte packets
    (First 4 hops hidden so as not to give away my location)
    5 72.14.208.130 (72.14.208.130) 5.587 ms 5.172 ms 3.809 ms
    6 * * *
    7 216.239.62.148 (216.239.62.148) 4.472 ms
    216.239.62.168 (216.239.62.168) 4.947 ms 4.957 ms
    8 209.85.253.189 (209.85.253.189) 2.687 ms 2.850 ms
    209.85.244.65 (209.85.244.65) 2.915 ms
    9 sea09s17-in-f14.1e100.net (172.217.3.110) 2.524 ms
    209.85.254.129 (209.85.254.129) 5.549 ms
    216.239.56.17 (216.239.56.17) 4.269 ms



  • @robnitro Take a look at the following guide, as it explains the issue you are witnessing and shows how to work around it - hint: floating rule #1.

    https://forum.netgate.com/post/807490



  • Thanks, I did add that rule for both WAN and LAN.

    The only difference is that my setup has the rules based on LAN, because I have FiOS. The cable boxes' video on demand (VOD) can stream video above my 50 Mbit limit; they aren't part of my QoS (excluded via a floating rule matching the cable boxes' IPs in an alias) because if they were, I would lose my 50 Mbit data maximum. Example: with a 16 Mbit HD stream playing, I can still get 50 Mbit of data. So the boxes are outside CoDel and/or HFSC.

    When I have limiters on WAN, using client IPs in floating rules to keep them out of the limiter doesn't work most of the time. I think it's because the traffic is technically coming into and leaving from the router's WAN IP address... not the local address, which only exists on LAN communication.

