Limiter bandwidth drops if I add delay
-
Could be your TCP stacks or negotiation. Depending on how you're measuring the bandwidth, a TCP connection with a 64KiB window and 200ms of latency would have a max bandwidth of ~2.6Mb/s. If your test opened 10 TCP connections, you would see about 26Mb/s.
Most TCP stacks support large windows beyond 64KiB. If you're using something like SYN proxy, then this will be broken and limited to 64KiB.
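The window/RTT limit above is easy to check yourself. A minimal sketch (the function name is mine, not from any tool mentioned in this thread):

```python
# Bandwidth-delay product check: a single TCP connection cannot move more
# than one window of data per round trip, so max throughput is window / RTT.
# Assumes no loss and no window scaling beyond the stated window size.
def max_tcp_throughput_mbps(window_bytes: float, rtt_s: float) -> float:
    return window_bytes * 8 / rtt_s / 1e6

# 64 KiB window at 200 ms RTT -> ~2.6 Mb/s per connection
print(round(max_tcp_throughput_mbps(64 * 1024, 0.200), 1))
```

Ten parallel connections each get their own window, which is why the aggregate lands near 26 Mb/s.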
-
I'm using several bandwidth monitoring tools, like Speedtest.net (including their Windows Store app), and they all show pretty much the same results. There is no other special configuration on the network: pfSense is the router (I have not configured anything regarding the TCP stack/negotiation/window) and behind it are the devices that need to be limited.
Would configuring the Advanced Options for limiters (Queue size (slots) & Bucket size (slots)) help with this in any way? Or is there anything else I can configure, or a better way to test the bandwidth and delay / packet loss limitations?
The thing is that I'd like to simulate, for example, a connection with 10 Mb upload/download, 200 ms delay and 10% packet loss, but however I configure it I never get more than 1Mb.
-
@raurelian said in Limiter bandwidth drops if I add delay:
However, when I throw delay (latency) or packet loss in the mix, the bandwidth drops heavily.
That's how TCP works.
I use this on occasion to see if what is being seen is anywhere close to what is theoretically possible:
https://www.switch.ch/network/tools/tcp_throughput/
-
OK, thank you very much, that is informative and a good way to check if the connection is reaching the maximum potential.
So, if I understand correctly, there is nothing I can do from pfSense itself to increase bandwidth once delay and packet loss are set, since pfSense will not mess with the TCP buffer size of connections passing through it - is that correct?
-
Right. The buffer size is negotiated between the endpoints.
You can clamp MSS but that is more about working around a packet size limitation somewhere in the path.
-
OK, got it, thank you for your help!
-
You might also be well served by testing locally using iperf3, or maybe even TRex, so you know you are not looking at internet inconsistencies.
-
Thanks, I'll definitely give at least iperf a try.
-
Just for clarification: lower bandwidth at higher latency is not fundamental to TCP, but it is a very common issue with many TCP stacks. In theory, TCP large windows (window scaling) are supposed to prevent this, but in practice I still notice upper "limits" with large RTTs. A SYN proxy breaks large-window negotiation because the proxy can't know whether the client supports the option, and the SYN proxy responds before ever interacting with the client.
Newer TCP congestion-control algorithms being rolled out in datacenters, though not quite ready for primetime for a worldwide rollout, are virtually unfazed by higher terrestrial RTTs and mild loss. One such example, from Google and used for YouTube, is BBR, and BBRv2 is right around the corner.
I am not sure if Microsoft has plans for when they'll update their TCP stack, but I would hope that pretty much everyone will be using a "modern" TCP stack in the next 10 years.
-
There are theoretical TCP throughput limits based on RTT and window size.
That calculator does a pretty good job of estimating the expected maximum.
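For the packet-loss side, the classic Mathis et al. approximation gives a rough upper bound for a single loss-limited TCP flow. A sketch (the 1460-byte MSS and the function name are my illustrative assumptions, not values from the calculator linked above):

```python
import math

# Mathis et al. approximation for loss-limited TCP throughput:
#   throughput <= (MSS / RTT) * (C / sqrt(p)),  with C ~ 1.22
# MSS in bytes, RTT in seconds, p as a loss fraction (0.10 = 10% loss).
def mathis_throughput_mbps(mss_bytes: int, rtt_s: float, loss: float) -> float:
    return (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss)) / 1e6

# 200 ms RTT with 10% loss caps a single flow well under 1 Mb/s,
# regardless of the configured limiter bandwidth.
print(round(mathis_throughput_mbps(1460, 0.200, 0.10), 3))
```

This lines up with the behavior described earlier in the thread: with 200 ms of delay and 10% loss configured in the limiter, staying under 1 Mb is roughly what the math predicts for TCP.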