Low Latency and Low Throughput Network Config
-
I'm looking to set up a low-latency network by limiting bandwidth per IP address and per connection. I was able to set this up in Red Hat using tc and am trying to find a way to do the same in pfSense. For example, I have a WAN connection with 100/100 Mb/s of capacity. I'd like to tell pfSense to rate-limit every internal IP address to 11 Mb/s up and down (in theory, impossible to clog the pipe until more than 9 users are maxing out), and to limit every new connection in the state table to 5 Mb/s. A user running up to 2 transfers at full rate in this configuration should retain very low network latency; at 3 transfers at full rate (15 Mb/s demanded against the 11 Mb/s per-IP cap), they'll start to see latency issues.
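For reference, the Red Hat side of this can be sketched with tc roughly like the following. This is a hedged sketch, not my exact config: the interface name (eth0), subnet (192.168.1.0/24), and example host (.10) are assumptions, and you'd repeat the per-host class/filter pair for each internal IP:

```shell
# Root HTB qdisc on the LAN-facing interface; unmatched traffic
# falls into a default class.
tc qdisc add dev eth0 root handle 1: htb default 30

# Parent class capped at the 100 Mbit/s WAN capacity.
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit

# One 11 Mbit/s class per internal host (shown here for 192.168.1.10;
# repeat with a new classid/filter for each IP).
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 11mbit ceil 11mbit
tc filter add dev eth0 protocol ip parent 1: prio 1 \
    u32 match ip dst 192.168.1.10/32 flowid 1:10
```

This shapes the download direction (traffic egressing toward the LAN); the upload direction needs a mirror-image setup on the WAN-facing interface.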
I know you can try to retain lower latency by identifying and prioritizing latency-sensitive traffic. In my experience, it's not possible to effectively identify all latency-sensitive traffic, and absolutely nothing beats the latency of an unclogged pipe. Once you've gotten into a situation where a queue is needed, things aren't ideal.
What would be really cool is if pfSense could dynamically scale the per-IP limits as interface utilization changes, and then dynamically scale the per-connection limit as a factor of the per-IP limit. That way you'd have the best of both worlds – high throughput when it's available and always low latency. The holy grail!
-
Something like: https://forum.pfsense.org/index.php?topic=63531.0 ?
-
Thanks for the find – yeah, that's close. I launched 4 client VMs and a pfSense VM to test. It looks like it uses queues: when the pipe was saturated, latency went up a bit on clients that were not transferring files, and way up on machines that were. It was definitely a substantial improvement over not having it enabled! I tested a few separate times and the results were consistent. I didn't test at a broader scale, but I'd be worried about those queues holding the line once the state table scales upwards of 100k connections. Has anyone tested this with more than a handful of clients? (I didn't see anything to that effect in the thread you linked.)
Doing this with a rate limit per IP and per connection would work better – no impact on latency when a client tries to saturate the link, and if a client has one or two connections running at full speed it won't kill their latency (albeit with a bigger hit to throughput). Any thoughts?
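Under the hood, pfSense limiters are dummynet pipes, so the per-IP and per-connection idea can be sketched in raw ipfw/dummynet syntax. This is only an illustration of the masking mechanism – the bandwidth numbers come from my example above, and on pfSense you'd build the equivalent in the GUI (Firewall > Traffic Shaper > Limiters) rather than run these commands:

```shell
# Pipe 1: 11 Mbit/s per internal source IP. The src-ip mask makes
# dummynet create a separate dynamic pipe for each /32 source address.
ipfw pipe 1 config bw 11Mbit/s mask src-ip 0xffffffff

# Pipe 2: 5 Mbit/s per flow. "mask all" sets every bit in the 5-tuple,
# so each connection gets its own dynamic pipe.
ipfw pipe 2 config bw 5Mbit/s mask all
```

One caveat I'm unsure about: a packet normally passes through only one pipe per firewall pass, so layering a per-connection limit inside a per-IP limit may need pipe/queue chaining (dummynet queues attached to a pipe) rather than two independent pipes.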
-
A large increase in latency is not an inherent characteristic of a saturated link, only of a saturated link with too much buffering. You can use something like CoDel to keep bufferbloat to something more reasonable, and it has the side effect of keeping streams mostly fairly balanced. That may be your 80/20 rule. If you need even more control and you have a limited number of clients, you could use HFSC, but limiters seem to be easier for most people to grasp.
Even with limiters, give CoDel a try.
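To illustrate on the Linux side (since you already have a tc setup there): shape slightly below link rate so the queue forms where you control it, then let fq_codel manage that queue. The interface name and the 95 Mbit/s shaping rate are assumptions, just a sketch of the technique:

```shell
# Shape a bit under the 100 Mbit/s link so the bottleneck queue
# lives in this box instead of in an upstream buffer.
tc qdisc replace dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 95mbit

# fq_codel keeps the standing queue short (CoDel AQM) and hashes
# packets into per-flow queues for approximate fairness.
tc qdisc add dev eth0 parent 1:10 fq_codel
```

In pfSense the analogous move is selecting CoDel as the scheduler/AQM on your limiter queues.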