Proper way to implement fq_codel on basic limiters for 2 LANs?



  • I have a guest network that I've set basic limiters on, and I also want to limit the LAN so that fq_codel on pfSense will queue my traffic instead of my ISP.

    I've limited LAN to 95% of my WAN (150/10), and limited Guest to 10% of that.

    I have four limiters set, upload and download for LAN and guest.
    Download limiters are set to mask destination, upload to mask source.

    I've set all pass rules on each interface to use their respective limiters.

    I then set the type on all 4 limiters to fq_codel.


    I would prefer to assign weights to differentiate the two networks so that LAN traffic takes priority over Guest. I can't figure out how to do this, because the parent rules have options for setting bandwidth but no option for setting a weight, while the child rules can set a weight but not bandwidth limits.


    On Guest it works perfectly: speeds are limited to almost exactly what I set them to, and my bufferbloat grade goes up to an A on dslreports.

    On LAN, however, while bufferbloat is decreased, a significant amount of bandwidth is lost. I lose between 30 and 50 Mbps.

    Is there anything that I can do to prevent this?
    Is this configured properly for what I am trying to accomplish? If not then what is the correct way to do it?
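
    For reference, a sketch of what the four-limiter layout described above roughly corresponds to underneath, in ipfw/dummynet terms. pfSense generates the real rules from the GUI; the pipe numbers, rates, and per-host (/32) mask widths here are assumptions for illustration only:

    ```sh
    # LAN download: ~95% of 150 Mbit/s, dynamic queues keyed on destination
    ipfw pipe 1 config bw 142Mbit/s mask dst-ip 0xffffffff
    ipfw sched 1 config pipe 1 type fq_codel

    # LAN upload: ~95% of 10 Mbit/s, dynamic queues keyed on source
    ipfw pipe 2 config bw 9500Kbit/s mask src-ip 0xffffffff
    ipfw sched 2 config pipe 2 type fq_codel

    # Guest download/upload: 10% of the LAN limits
    ipfw pipe 3 config bw 14Mbit/s mask dst-ip 0xffffffff
    ipfw sched 3 config pipe 3 type fq_codel
    ipfw pipe 4 config bw 950Kbit/s mask src-ip 0xffffffff
    ipfw sched 4 config pipe 4 type fq_codel
    ```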



  • I changed it to two limiters - Up & Down - set to 95% of my WAN.

    Under each I put two queues - LAN & Guest - with LAN weighted at 90 and Guest weighted at 10.

    This works much better: bufferbloat on download is less than ~5 ms over baseline (A+) under load with both networks competing for multiple types of traffic.

    On upload it's great until the connection is under full load; then bufferbloat gets into the 100 ms+ range. I'm not sure why this is.

    I'm still seeing about a 10 Mbps loss of download off the top (10 Mbps under the 95% limit I set), but this way is much better than before.

    Any clues as to how to further tweak this to get rid of that upload bufferbloat and download bandwidth loss?
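
    In ipfw/dummynet terms, the pipe-plus-weighted-queue layout described above would look roughly like the following. This is a sketch only - pfSense generates the actual rules from the GUI, and the pipe/queue numbers and exact rates are assumptions:

    ```sh
    # One pipe per direction at 95% of the 150/10 line
    ipfw pipe 1 config bw 142Mbit/s              # download pipe
    ipfw sched 1 config pipe 1 type fq_codel
    ipfw pipe 2 config bw 9500Kbit/s             # upload pipe
    ipfw sched 2 config pipe 2 type fq_codel

    # Child queues share each pipe by weight: LAN 90, Guest 10
    ipfw queue 1 config pipe 1 weight 90         # LAN download
    ipfw queue 2 config pipe 1 weight 10         # Guest download
    ipfw queue 3 config pipe 2 weight 90         # LAN upload
    ipfw queue 4 config pipe 2 weight 10         # Guest upload
    ```

    This matches the parent/child split the thread ran into: in dummynet, bandwidth belongs to the pipe, while weight belongs to the queues sharing that pipe.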



  • You shouldn't need to tweak anything to achieve the bandwidth you configured. If you input 95Mbit then you should get 95Mbit, otherwise there's a problem with pfSense or your configuration.



  • Where can I look for a problem in pfSense or my configuration that would be related to this?

    My pfSense box has been running great other than not getting the full bandwidth I configured.

    I'm also not the only person running into this; I've seen a couple of other posts on the same topic, but no resolution.



  • Another question, what is the proper mask for source and destination networks for this?

    I selected /24 because that's my end, but I'm unsure if that's correct.

    I've seen several posts of people sharing their "ipfw sched/queue show" output, and I believe they are selecting /128 - why?
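
    For what it's worth, in dummynet the mask controls how dynamic flow queues are carved out of a pipe or queue: traffic is grouped by the masked address bits, one queue per group. A hedged sketch (rates and pipe numbers assumed, for illustration only):

    ```sh
    # mask /32 (0xffffffff): one dynamic queue per individual host
    ipfw pipe 1 config bw 142Mbit/s mask dst-ip 0xffffffff

    # mask /24 (0xffffff00): every host in the same /24 shares one queue
    ipfw pipe 2 config bw 142Mbit/s mask dst-ip 0xffffff00

    # the /128 seen in "ipfw sched show" output is the IPv6 analogue
    # of a per-host /32 mask (often written as: mask dst-ip6 /128)
    ```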



  • @Nullity:

    You shouldn't need to tweak anything to achieve the bandwidth you configured. If you input 95Mbit then you should get 95Mbit, otherwise there's a problem with pfSense or your configuration.

    No, sorry, it's not working that way. Depending on the traffic shaper type, you will get a 2-6% loss of configured bandwidth. This is by design and you can't override it, because if you increase the bandwidth limit, bufferbloat moves to the ISP side.
    Furthermore, if you are using both the ALTQ shaper and limiters (dummynet), then you can lose as much as 10% of bandwidth or even more. The same happens if you apply several limiters to the same traffic.
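
    As a rough sanity check (my arithmetic, not from the post above): applying the 2-6% figure to the 150 Mbit/s line limited to 95% that was described earlier in the thread gives a loss window in the same ballpark as the ~10 Mbps drop being reported:

    ```python
    # Rough numbers, assuming the 150 Mbit/s line limited to 95%
    # described earlier in this thread.
    line = 150.0                      # advertised download, Mbit/s
    limit = line * 0.95               # configured limiter: 142.5 Mbit/s
    loss_low = limit * 0.02           # 2% shaper loss
    loss_high = limit * 0.06          # 6% shaper loss
    print(f"limiter: {limit:.1f} Mbit/s")
    print(f"expected goodput: {limit - loss_high:.2f} to {limit - loss_low:.2f} Mbit/s")
    ```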



  • @w0w:

    @Nullity:

    You shouldn't need to tweak anything to achieve the bandwidth you configured. If you input 95Mbit then you should get 95Mbit, otherwise there's a problem with pfSense or your configuration.

    No, sorry, it's not working that way. Depending on the traffic shaper type, you will get a 2-6% loss of configured bandwidth. This is by design and you can't override it, because if you increase the bandwidth limit, bufferbloat moves to the ISP side.
    Furthermore, if you are using both the ALTQ shaper and limiters (dummynet), then you can lose as much as 10% of bandwidth or even more. The same happens if you apply several limiters to the same traffic.

    It depends on a lot of things: which transmission rate we are measuring (throughput or goodput), network latency, buffer sizes, etc. My point was that if you set up a 95Mbit limiter/queue, it will actually transmit at that rate (unless there's a major problem with ALTQ/dummynet or the config). The "problems" lie elsewhere: overhead, latency, inaccurate expectations, etc.

    You're kinda right, though: when measuring goodput you will see a max speed a few percent lower than the configured bitrate limit on the interface/queue/limiter, especially on downloads (because of the higher latency).

    @belt9
    During a fully saturating download, do your throughput graphs show a consistent maximum rate, or do they constantly fluctuate?

    You might read this: http://www.linksysinfo.org/index.php?threads/qos-tutorial.68795/
    It is my favorite QoS intro. It does a good job explaining the fundamentals and most of the important factors that impact QoS.



  • Thanks, I'll read that!

    I'm only using dummynet.
    I'll try to get some flent rrul output up here tonight.



  • Here's my output from dslreports & flent rrul.

    rrul is a tough test!

    I limited my bandwidth in dummynet down to 85% from 95% per the linked thread. That seems a little ambitious, but it makes sense for times when the ISP isn't delivering advertised speeds on the line - which would be the times when I most want fq_codel working for me.

    I'd be interested in seeing others' results in limiting bufferbloat!

    On my tests all of the traffic was routed over a VPN.

    Flent RRUL was run against the public netperf server `netperf.bufferbloat.net`.

    
    Does anyone know of other public servers for netperf?
    
    ![first_dslreports3.png](/public/_imported_attachments_/1/first_dslreports3.png)
    ![first_dslreports3.png_thumb](/public/_imported_attachments_/1/first_dslreports3.png_thumb)
    ![first_rrul3.png](/public/_imported_attachments_/1/first_rrul3.png)
    ![first_rrul3.png_thumb](/public/_imported_attachments_/1/first_rrul3.png_thumb)
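
    For concreteness, the drop from 95% to 85% headroom mentioned above works out as follows (simple arithmetic, assuming the 150/10 Mbit/s line from the start of the thread):

    ```python
    # Limiter values after dropping headroom from 95% to 85% of the line.
    down, up = 150.0, 10.0       # advertised rates, Mbit/s
    factor = 0.85                # headroom per the linked QoS tutorial
    down_limit = down * factor
    up_limit = up * factor
    print(f"download limiter: {down_limit:.1f} Mbit/s")
    print(f"upload limiter:   {up_limit:.2f} Mbit/s")
    ```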


  • Here's some more RRUL & DSLReports output using fq_codel without the VPN variable.

    The DSLReports output and the last two pictures are over wifi, on an old, crappy Intel 6205 Advanced-N card. I had to limit the dummynet pipe down to 40 Mbps to get fq_codel to control this slow card. I made an alias for all of my slow wifi devices and a firewall rule to pass their traffic through the slower dummynet pipe.

    I am very pleased with the wifi performance. RRUL tests without fq_codel were averaging in the 3,000-5,000 ms range, often spiking into the 8,000 ms range and sometimes higher. I tried adjusting txqueuelen and setting SFQ instead of pfifo_fast on the AP (Ubiquiti AP AC Pro), but it didn't improve performance much. Simply setting fq_codel to handle it on pfSense dramatically improved wifi.





    ![network being used - slow wifi.png](/public/imported_attachments/1/network being used - slow wifi.png)
    ![network being used - slow wifi.png_thumb](/public/imported_attachments/1/network being used - slow wifi.png_thumb)
    ![network unused - slow wifi.png](/public/imported_attachments/1/network unused - slow wifi.png)
    ![network unused - slow wifi.png_thumb](/public/imported_attachments/1/network unused - slow wifi.png_thumb)
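
    The slow-wifi arrangement described above (an alias of slow devices passed through a dedicated 40 Mbps pipe) would look something like this in ipfw terms - a sketch with an assumed pipe number; in pfSense it is an alias plus a pass rule with this limiter selected:

    ```sh
    # A separate, slower pipe so the queue builds in pfSense's fq_codel,
    # not in the slow wifi card's own buffers
    ipfw pipe 5 config bw 40Mbit/s mask dst-ip 0xffffffff
    ipfw sched 5 config pipe 5 type fq_codel
    ```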