Simple PRIQ Setup Killing Max Upload Speed?

  • I'm trying to use a simple PRIQ shaper to prioritize ACKs over other traffic. I used the wizard to set up a multi-LAN, single-WAN PRIQ configuration, and set the PRIQ WAN bandwidth limit to my ISP's upload bandwidth of 12 Mbps. (We'll ignore the download bandwidth for now, as that is less of an immediate concern.) I also kept most of the default PRIQ priority settings: e.g. ACKs get top priority, most things, including HTTP traffic, get shunted into the default priority in the middle, BitTorrent traffic gets a lower priority, etc.

    Now, whenever I try to do an HTTP upload, even when no other upload or download traffic is present, my upload speeds top out around 7 Mbps, despite the WAN PRIQ bandwidth limit being set to 12 Mbps. Is there a reason the PRIQ traffic shaper is wasting ~40% of my upload bandwidth even when no other higher priority traffic is present? Is there any way to fix this?

    I don't mind lowering the upload bandwidth when higher-priority traffic is also being passed (after all, that's kind of the point), but the fact that I'm taking such a large bandwidth hit when no higher-priority traffic is present seems like an issue.

    I will note that when I look at the traffic flowing into the LAN interface, it does appear to be closer to 11 Mbps, but when that same traffic passes out of the WAN interface, it gets shaped to 7 Mbps. Presumably the PRIQ shaper is dropping the difference (about 4-5 Mbps worth of traffic). But why does it do this when there is plenty of bandwidth overhead to spare and no higher-priority traffic?

    If I arbitrarily raise the PRIQ bandwidth limit to 20 Mbps or so, I can hit my actual upload speed of 12 Mbps, but that seems to essentially defeat the purpose of traffic shaping.

    Any thoughts on why a simple PRIQ shaper would waste so much bandwidth in the absence of higher-priority traffic, or how I might fix this issue? I want to do some basic traffic shaping, but not at the expense of 40% of my upload bandwidth.

    Screenshots of the PRIQ config attached. I'm running pfSense 2.3.1-RELEASE-p5 on an Atom C2758 board with 8 GB of RAM.
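    For context, the wizard-generated setup above corresponds roughly to an ALTQ PRIQ block like this in pf.conf terms (the interface and queue names here are illustrative, not taken from my actual config):

```
# 12 Mb upstream split into three priority classes (names hypothetical)
altq on em0 priq bandwidth 12Mb queue { qACK, qDefault, qP2P }
queue qACK     priority 7                 # empty ACKs, top priority
queue qDefault priority 3 priq(default)   # HTTP and everything unmatched
queue qP2P     priority 1                 # BitTorrent, lowest priority
```

    In PRIQ, a higher-priority queue is always serviced first, but an idle high-priority queue should not reserve any bandwidth, which is why the behavior below surprises me.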

  • Did you try increasing your default queue sizes? They default to 50 packets, which is quite small, especially for what seems to be an extremely bursty, low-quality last-mile connection. You can look at the drop statistics to see whether the queues are even dropping packets or the issue lies somewhere else. Otherwise, get out the packet sniffer and start comparing packets coming in on the LAN with those going out on the WAN.

    You also need to be careful about ECN. There are many bufferbloated hops on the Internet that mishandle ECN in all kinds of ways, and that can hurt performance.
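    To put that default qlimit of 50 in perspective, here is the arithmetic (assuming full-size 1500-byte packets, which real traffic only approximates): 50 packets is only about 75 KB of buffer, or 50 ms of traffic at this link's 12 Mbps, so a brief burst from the LAN can overflow the queue long before the link is saturated.

```python
# How much buffering does the default PRIQ qlimit of 50 actually give?
# (1500-byte packets assumed; real traffic mixes in smaller packets.)

qlimit_packets = 50
packet_bytes = 1500            # assumed full-size MTU packets
link_mbps = 12                 # the WAN upload limit from the original post

buffer_bits = qlimit_packets * packet_bytes * 8
buffer_ms = buffer_bits / (link_mbps * 1000)   # Mbps * 1000 = bits per ms

print(f"queue depth: {buffer_bits // 8} bytes = {buffer_ms:.0f} ms at {link_mbps} Mbps")
```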

  • As far as I can tell from the queue stats, I'm not hitting the queue length limit. I am, however, dropping packets on WAN (which makes sense, given the input vs. output bandwidth discrepancy).

    Traffic graphs and stats attached.

  • You have enough bandwidth; try checking the "Codel Active Queue" box and possibly unchecking ECN, even though in theory it should be superior.

  • I have found RED and ECN to both be detrimental to my speeds. Things got better when they were disabled.

  • Yeah, firewalls and routers on the Internet completely abuse ECN.

  • I disabled ECN and enabled CoDel. It had a minor effect, but not much. I'm still topping out around 8 Mbps, even though the queue bandwidth limit is set to 12 Mbps. When I disable shaping altogether, I hit 12 Mbps right away.

    I did notice something a bit interesting, though. The queue status screen shows the measured bandwidth as being around 12 Mbps (attached). Yet the traffic graph and the test program itself both show ~8 Mbps (also attached). Is this discrepancy normal? Or is it possible there's something wrong with the shaper's bandwidth measurement, where it thinks it's sending at 12 Mbps and is thus slowing down the pipe even though it's actually only sending at 8 Mbps?
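    One speculative way to reconcile the two numbers: if dropped packets are being retransmitted, the queue status screen would count wire traffic (including retransmissions) while the test program only counts goodput. A quick sanity check of that hypothesis (the retransmission fraction here is assumed for illustration, not measured):

```python
# Hypothesis check: the queue counts wire bytes, the speed test counts goodput.
# If a third of the shaped traffic were TCP retransmissions of dropped
# segments, the two readings would diverge roughly as observed.
# (retrans_fraction is an assumed value, not a measurement.)

wire_rate_mbps = 12.0      # what the queue status screen reports
retrans_fraction = 1 / 3   # assumed share of retransmitted bytes on the wire

goodput_mbps = wire_rate_mbps * (1 - retrans_fraction)
print(f"implied goodput: {goodput_mbps:.1f} Mbps")
```

    If that's what is happening, the drops themselves would be the real problem, and the bandwidth readings would both be "correct" at different layers.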

    Thanks for the thoughts. They're appreciated.

  • So with a bit more testing, turning off CODEL (along with all the other scheduler options) gets me the best bandwidth: between 11 and 12 Mbps.

    Doing so, however, seems to badly break the traffic graphs. They now display radically lower throughput than I'm actually getting; see the attached graph showing ~4 Mbps when I'm actually getting a fairly steady 11+ Mbps up. Interestingly, the issue is most pronounced when no special scheduler options are enabled.
