Limiter for bufferbloat still has latency / jitter
My connection is 330 Mbps / 33 Mbps
My goal is to remove bufferbloat while gaming. The problem is that no matter what I try, there's always added latency.
On the Waveform bufferbloat test site, I can't score past an A: +7 to +9 ms on download. I can confirm this by running a speed test while pinging; the ping jumps by roughly 10 ms. It also shows up in games as jitter.
I've tried the official Netgate guide. I've tried the Lawrence Systems YouTube guide (which is similar to the sans.edu guide).
I'm running pfSense on Proxmox (modest hardware, but it should be good enough: a 4-core 1.5 GHz Intel Celeron that boosts higher, with 4 GB of memory).
I've tried PCI passthrough of my Intel i340 NIC, different ports, different tunables, hyperthreading off, and various hardware offloading settings. I've also done a factory reset of pfSense.
Trying all of this, I've gotten an A+, but it won't stay consistent. And in the real world I'll get rubberbanding in games when the network is under load, or sometimes minor packet loss.
From my testing (with admittedly limited knowledge), it seems the queue can't be processed fast enough, so latency is introduced. However, CPU load is minimal when I monitor it (top -aSH).
Evidence for this: if I set my download limiter to 150 Mbps, it performs far better than at 250 Mbps. The upload limiter works fine since that bandwidth is small.
However, capping my download at 150 isn't ideal. The best solution I've found is an upload limiter on WAN (never an issue since it's so small) and two download limiters on the LAN side (one for my gaming PC, the other for everything else). That way gaming traffic isn't stuck in a queue behind other things. Of course this isn't a perfect solution…
Is this just how it is? I'm guessing that for 99% of applications, scoring an "A" on the bufferbloat test with roughly +7 to +9 ms isn't going to matter or even be noticeable to some of you. But can pfSense do better here, and am I just missing something?
@hc1900 You understand that once you start to queue packets for transmission, you will always see added latency, and more than likely bursts of jitter. The added latency depends on the length of the queue and the time it takes to process the queue.
If a packet has to wait in a queue, that waiting adds to the latency. Jitter is the change in response time: a packet waiting in the queue vs. not waiting is exactly that kind of change.
Think of it this way: if you go into a store and there is no line at the checkout, you get to the checkout faster. If there is a line, you have to wait, so it takes you longer to get to the checkout.
If you measure the time for a person to get to the checkout, and then some guy takes longer to check out and a line starts to form, or just more people want to use the checkout and a line forms, the time to reach the checkout now changes with the queue length. That change in time is jitter.
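The checkout analogy can be put into rough numbers. A minimal sketch, using the poster's 33 Mbit/s upload link and assuming full-size 1500-byte packets (both figures from the thread; the queue depths are illustrative):

```python
# Queueing-delay arithmetic: how much latency a queue of N packets adds
# on a 33 Mbit/s link (the poster's upload speed), assuming 1500-byte packets.

PACKET_BITS = 1500 * 8          # one full-size Ethernet payload, in bits
LINK_BPS = 33_000_000           # 33 Mbit/s upload

# Time to serialize a single packet onto the wire.
per_packet_ms = PACKET_BITS / LINK_BPS * 1000

for queued in (0, 10, 25, 50):
    # A packet that arrives behind `queued` others must wait for all of
    # them to drain before its own transmission even starts.
    wait_ms = queued * per_packet_ms
    print(f"{queued:3d} packets ahead -> ~{wait_ms:.1f} ms added latency")
```

At this link speed, roughly 25 queued packets already account for about 9 ms of waiting, which is in the same ballpark as the +7 to +9 ms the test reports.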
If you want to minimize latency and jitter, you would use QoS, where you let the traffic you want to have lower latency and jitter cut in line in the queue.
You would look at, say, fq_codel on pfSense, or you could look for a "modem" that does SQM/AQM, etc.
These help by working the queue per flow. AQM stands for active queue management; SQM is smart queue management.
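For concreteness, here is a sketch of what an fq_codel limiter pair looks like at the FreeBSD dummynet level. On pfSense you would normally configure this in the GUI (Firewall > Traffic Shaper > Limiters) rather than typing these; the bandwidth numbers are from this thread, and the scheduler parameter values are illustrative assumptions, not recommendations:

```shell
# Sketch only: dummynet/dnctl view of fq_codel limiters on pfSense.
# In practice these are created via Firewall > Traffic Shaper > Limiters.

# Download pipe set a little under the 330 Mbit/s line rate, so the queue
# forms here (where fq_codel can manage it) instead of in the modem.
dnctl pipe 1 config bw 290Mbit/s
dnctl sched 1 config pipe 1 type fq_codel target 5ms interval 100ms limit 1000

# Upload pipe set under the 33 Mbit/s uplink for the same reason.
dnctl pipe 2 config bw 30Mbit/s
dnctl sched 2 config pipe 2 type fq_codel target 5ms interval 100ms limit 1000
```

The point of fq_codel here is that each flow gets its own sub-queue, so a bulk download cannot hold a game flow's packets behind it, and CoDel drops early to keep standing queues short.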
@johnpoz Thanks for the detailed response, appreciate it. I get that there will always be some latency, but I was hoping for something not so high. That is, a 'set and forget' approach following one of the guides I mentioned still produces rubberbanding in games under network load.
I'll look into QoS for the specific traffic, but for now, does anyone have advice on how best to set up the limiters, and which settings get packets off the queue faster?
I have a 290 Mbps download limiter on WAN (the "main queue") and a 225 Mbps download limiter on a VLAN where all the non-gaming traffic happens, so that traffic can't over-queue the main one. This is on a 330 Mbps connection. Is there a better approach? This setup introduces some spikes too, but it's the best I've tried so far.
I have hardware checksum offload and TSO ENABLED, since I'm thinking that should be faster than relying on the CPU, but I have no real numbers to back this up.
I also have hyperthreading off, based on what I've read, but again nothing to back this up.
Anything else to tweak to speed up the queues? Or a better way to setup the limiters?
I'm confused about one thing: with a 290 Mbps main queue and a 225 Mbps queue for everything else, why do I still get latency? Even if the 225 is saturated, there should be enough headroom that the main queue isn't jammed, no?
Or to put it another way, shouldn't the queue only come into effect once the threshold is exceeded? I don't know how it technically works, but I would assume traffic flows naturally below the limit: if I have 10 checkout lanes in a supermarket and only 8 people waiting, that shouldn't cause a jam?
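One piece of the answer is that traffic is bursty: even when the *average* load is well under the limiter, a burst arriving at full line rate still has to be paced down to the limiter's rate, and that pacing is a queue. A toy fluid simulation, assuming the 330/290 Mbit/s numbers from this thread (it is an illustration of the principle, not of pfSense internals):

```python
# Toy simulation: traffic arrives from the ISP at up to 330 Mbit/s, the
# download limiter drains at 290 Mbit/s. The offered load averages only
# 165 Mbit/s, yet the backlog (and so the latency) grows during each burst.

LINE_RATE = 330.0   # Mbit/s the WAN can deliver during a burst
LIMITER = 290.0     # Mbit/s the limiter lets through

queue_mbits = 0.0
for ms in range(200):
    # 100 ms burst at full line rate, then 100 ms idle:
    # average load = 165 Mbit/s, well under the 290 Mbit/s limiter.
    arriving = LINE_RATE if ms < 100 else 0.0
    queue_mbits += (arriving - LIMITER) / 1000.0   # per-millisecond delta
    queue_mbits = max(queue_mbits, 0.0)            # queue can't go negative
    if ms in (50, 99, 150):
        # Time for a packet arriving now to drain through the backlog.
        delay_ms = queue_mbits / LIMITER * 1000.0
        print(f"t={ms:3d} ms  backlog={queue_mbits:6.3f} Mbit  "
              f"added delay={delay_ms:5.2f} ms")
```

By the end of the burst the backlog adds over 10 ms of delay, then drains back to zero in the idle gap, which is exactly the kind of transient latency spike the bufferbloat test and in-game jitter would show even though the average throughput never reached the cap.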