Current recommendation for traffic shaping on XG-1541 with ix driver?

  • Using a Netgate XG-1541 (2 igb and 2 ix interfaces). Since the underlying FreeBSD altq(4) does not support the ix driver (in pfSense 2.4.4-p3/FreeBSD 11.2), nor will it as of FreeBSD 12.1 (the most recent supported release as of this post), what is the current recommendation for setting up traffic shaping?

    I've seen suggestions to put everything on VLANs and traffic shape with altq on the VLANs, to tweak driver settings, and to use limiters, but many of the posts are 1+ years old, so I thought I would ask in case there is a current best practice.

    Referring to a current (< 1 year old?) post would be great, as I didn't find a recent one here. An external blog post would also work (I've seen several with various recommendations).

    The environment is a SOHO with a cable connection limited to 50 Mbps down/5 Mbps up, IPv4 only. The primary concern is increasing priority for VOIP and video conferencing, which sometimes get choppy when other bandwidth-intensive tasks are running.

    As a related question, I didn't see it spelled out in the man pages or elsewhere online - what happens when an interface is excluded from ALTQ? Does it get first priority for full/unthrottled bandwidth, or does it get "whatever is left" after ALTQ-enabled interfaces take priority?

    For example, if VLAN99 (ALTQ + VLAN interface on physical ix1 interface) and LAN (physical ix0 interface) both try to saturate the WAN upload/download capacity, what is the expected outcome?

    1. ALTQ + VLAN99 wins and gets [nearly] all the bandwidth
    2. LAN wins and gets [nearly] all the bandwidth
    3. VLAN99 and LAN each get about 50% of the bandwidth
    4. something else?

    Note - I previously ran ALTQ with PRIQ on earlier Netgate hardware (C2758), which is what I would have preferred to configure on the current firewall if the ix driver were supported.
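    For context, the PRIQ setup on the C2758 looked roughly like the following. This is a hypothetical sketch: the interface name, queue names, and the SIP port are illustrative, and pfSense normally generates these rules from the shaper wizard rather than from a hand-written file.

    ```shell
    # Illustrative pf.conf fragment for ALTQ PRIQ on an igb interface;
    # higher priority numbers win. Port 5060 (SIP) stands in for the
    # real VOIP match criteria.
    cat > /tmp/priq-sketch.conf <<'EOF'
    altq on igb0 priq bandwidth 5Mb queue { q_voip, q_default }
    queue q_voip    priority 7
    queue q_default priority 1 priq(default)
    pass out on igb0 proto udp to any port 5060 queue q_voip
    EOF
    pfctl -nf /tmp/priq-sketch.conf   # -n: parse-check only, don't load
    ```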

  • LAYER 8 Netgate

    If I absolutely had to use shaping on an ix interface I would put it on a VLAN and do whatever you did on the C2758 that worked for you.

  • Thanks! ALTQ / PRIQ worked well on the C2758 for prioritizing on the WAN, so I'll plan to stick with that.

  • You could also give fq_codel a go; it is pretty much made for what you want, requires minimal setup, and you wouldn't have to mess around with VLANs to get basic QoS.
    You can find a guide here. I would leave flows at the default, lower limit to around 1000, and use CoDel as the queue management algorithm in the limiter but not in the queue. I am running it on a 120/120 Mbps line with a Xeon D-1521 Supermicro board and it does a pretty decent job at keeping latency low in my case :)
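    In dummynet terms, that suggested limiter shape would look something like this. A rough sketch, assuming the 120 Mbit/s line mentioned above; pfSense builds the real configuration from the Limiter GUI, and the pipe/scheduler/queue numbers here are arbitrary.

    ```shell
    # One direction of a limiter: fq_codel scheduler with limit lowered
    # to ~1000, and CoDel as the AQM on the limiter (pipe) but not on
    # the child queue.
    ipfw pipe 1 config bw 120Mbit/s codel       # limiter itself uses CoDel
    ipfw sched 1 config pipe 1 type fq_codel limit 1000
    ipfw queue 1 config sched 1                 # child queue, default tail drop
    ```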

  • @bobbenheim Thanks - I have just read through that thread and looked into the fq_codel overview, and that looks like a good option in my scenario. I had not previously used limiters. It looks like fq_codel could also improve other buffer/burst-sensitive traffic (e.g. online video games) on the home LAN segment of the network that is unrelated to my primary VOIP issue.

  • @PVuchetich2 Thanks again - I set up fq_codel using those instructions, and an initial test (using the dslreports speed test) suggests that "bufferbloat" is greatly improved ["C" to "A+"]. Hopefully that translates to a noticeable difference. Overall speed may be a little lower than without the limiter (pre-config measured speed was 56.8 down/5.24 up, limiter set to 50/5, new measurement was 47.7/4.9), but that is likely how the limiter is intended to work (controlling the flow before the cable provider's throttling is triggered), and the estimate is based on a very limited sample (3 pre / 3 post test runs).

    At first I thought the ICMP traceroute rule wasn't working until I checked that traceroute from my workstation to the internet was using UDP by default (FreeBSD traceroute(8)), but the rule worked just fine with 'traceroute -I' from the workstation to force ICMP.

    It looks like this would likely be a good solution - I'll just need to validate that the original symptoms are gone when I saturate the bandwidth (easy enough with the cable modem). There may be some minor tweaks to improve settings, which would require a bit more reading and testing.

  • @PVuchetich2 I would imagine you could easily set your download limit at 55 Mbps and obtain the same result. YMMV of course, but it's still ten percent more download bandwidth, which could be nice to have at peak utilization.

  • Hello, I've run into a similar brick wall.
    I have a number of Netgate XG7100s which I need to set up traffic shaping on, but when I run the wizard, I'm met with "This firewall does not have any LAN-type interfaces assigned that are capable of using ALTQ traffic shaping."

    Both here and in the XG7100 manual, we are told that it will work when set up on a VLAN interface. The thing is, both the LAN and WAN on the XG7100 appear to already be VLAN interfaces, since it uses a switch-on-chip:
    WAN is vlan 4090 on lagg0
    LAN is vlan 4091 on lagg0

    What can I do here?

  • @lukeskyscraper you can set up fq_codel with limiters, which does not rely on ALTQ; you can find a guide here. I would suggest using CoDel as the queue management algorithm in the limiter and tail drop in the queue.

  • @lukeskyscraper I ended up using the limiter on WAN (no need to set limiters on the LAN side in my scenario because it is rarely saturated) instead of ALTQ because it was easier than setting up more VLANs. Another post has a link to the detailed settings, but my basic settings were: Scheduler = "FQ_CODEL" and Queue Management Algorithm = "Tail Drop". The upload and download rates were set (and tinkered with) starting at 90% of the measured upload/download speed, and adjusted up to about 98% of the measured rate.

    Other fq_codel settings:
    target = 5
    interval = 100
    quantum = 300
    limit = 20480
    flows = 60000
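    For reference, those GUI values map roughly onto the underlying ipfw/dummynet fq_codel parameters like this. A sketch for the upload direction only; the pipe numbers and the 4.9 Mbit/s figure (~98% of my measured 5 Mbps) are illustrative, since pfSense generates the real rules from the Limiter pages.

    ```shell
    # Upload limiter: fq_codel scheduler with the settings listed above.
    ipfw pipe 1 config bw 4900Kbit/s            # ~98% of measured upload
    ipfw sched 1 config pipe 1 type fq_codel \
        target 5ms interval 100ms quantum 300 limit 20480 flows 60000
    ipfw queue 1 config sched 1                 # tail drop on the child queue
    ```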

    The "Quantum = 300" is based on the following:

    "We have generally settled on a quantum of 300 for usage below 100mbit as this is a good compromise between SFQ and pure DRR behavior that gives smaller packets a boost over larger ones."

    It seems some recommendations are to lower the limit value based on my link speed (50 Mbps down/5 Mbps up), but I haven't seen issues that would suggest that tinkering is needed yet.

    After 12 days of this setup, I have not had issues with VOIP quality (based on my qualitative experience with calls). The VOIP device shows self-reported max jitter of 12.31 ms on one call, and an average of 1.65 ms on the 10 most recent calls, with 0.26% packets lost on received packets, and an average latency of 105.5 ms. These call stats are shown because they are conveniently reported by the device; they were not collected scientifically or in any reproducible manner, in case you are looking at similar statistics.

    If I have perceived issues with quality, I would then take the time to measure my VOIP packet sizes (to see if quantum needs to be increased), saturate the bandwidth (e.g. initiate multiple downloads/uploads to saturate the WAN connection), then initiate a call from VOIP to my cell phone to collect statistics.

  • @PVuchetich2 The bufferbloat guidance also states that a limit value of 10240 is overkill even for gigabit Ethernet, and that lower values could be beneficial, especially with lower-speed links.
    I tried setting flows at 204780 some time ago and it made my system, which is a Xeon D-1500 series setup similar to yours, reboot at random when using the GUI. Is that something you have experienced?

  • I haven't had any reboots, although it has been less than 2 weeks. Bufferbloat does indicate that relatively high limit values will delay the limiter kicking in, and calls out cable modem setups in particular because of their relatively low bandwidth, which may run into provider-determined limits before fq_codel kicks in. That is one of the settings I may need to tweak if I see performance issues on VOIP under load.

    On my SOHO network, I don't think I'll ever hit the limit on "flows". Is "flows at 204780" a typo in that message? The link you shared earlier for fq_codel states that "65535" is the max setting without running into a boot loop. I didn't see a max on bufferbloat, so this may be a pfSense or FreeBSD implementation limitation. My understanding is that it creates a hash table in memory to separate out connections, so it only needs to be big enough to split traffic into enough separate flows that each flow gets its fair share.

  • @PVuchetich2 that is a typo, I meant 20480 :)

    Update: I tried upping the value of flows and am not seeing any stability issues, though I am on 2.4.5 versus 2.4.4-p3 when I last increased flows. Also, setting flows to 40960 gave a couple of ms less on upload bufferbloat in the dslreports speed test. Download seems to be the same, but CPU usage is increased by it.
