@madbrain said in Netflix buffering with 3 WANs:
Each instance of speedtest is not going to the same server, actually. The sticky connections really do break the load balancing for me. I'm down to 2 ISPs now (cancelled Comcast yesterday).
Well, what you are showing is that each instance does in fact go to one and only one server, that sticky connections do exactly what is expected... and that it is still working.
What I mean by server is the target machine that speedtest uses for the test.
Here you are running one test from the Windows machine, which goes to one server: WiLine Networks - San Francisco, CA (id: 17587)
Example from my Windows machine alone, without sticky connections:
Server: WiLine Networks - San Francisco, CA (id: 17587)
ISP: Verizon Wireless
Idle Latency: 9.99 ms (jitter: 0.01ms, low: 9.97ms, high: 9.99ms)
Download: 324.46 Mbps (data used: 448.5 MB)
103.45 ms (jitter: 34.71ms, low: 17.02ms, high: 350.77ms)
Upload: 30.59 Mbps (data used: 30.3 MB)
27.33 ms (jitter: 8.53ms, low: 10.49ms, high: 63.04ms)
Packet Loss: Not available.
Result URL: https://www.speedtest.net/result/c/d9138c04-a261-4a97-99f9-337aeaafbb5a
Your Linux box ended up using a different server: NETGEAR Inc. - San Jose, CA (id: 43447)
So two different sessions and two different servers, but only one server per session, agree?
Now, without sticky connections, the "streams" for each of these tests go out via both of your ISP connections (WANs), which proves that load balancing is working.
However nice it is to see better results in speedtest, that is not really the idea behind load balancing. Rather, you are after better overall performance for the company as a whole, or the household in your case. So even though the shared bandwidth is now the aggregate of both links, you still expect each application or session to use only one connection at a time.
Splitting a single session across two or more WANs may even break the connection, like you are experiencing with Netflix.
And to be honest, I don't know if a household really benefits from load balancing all that much, unless the bandwidth per ISP is quite low and you do a lot of downloading of application updates, game updates, etc.
That said, if you have two ISPs for failover purposes, you can of course still use them for load balancing, so there is absolutely no harm in doing it.
Anyway, when you turn on sticky connections, all traffic from a single session is expected to go out via one and only one of the WAN interfaces, which is also what you tested and showed...
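If it helps to picture the difference, here is a minimal sketch in Python. The interface names are hypothetical and pfSense's real implementation tracks state per source/destination in the firewall, so treat this only as an illustration of the two selection policies:

```python
import hashlib
import random

# Hypothetical interface names -- just for illustration.
WANS = ["WAN_Verizon", "WAN_Sail"]

def pick_wan_balanced(weights=(1, 1)):
    # Without sticky connections: every new connection is assigned
    # independently, in proportion to the configured interface weights.
    return random.choices(WANS, weights=weights)[0]

def pick_wan_sticky(src_ip):
    # With sticky connections: every connection from the same source
    # hashes to the same WAN, so a whole session stays on one interface.
    h = int(hashlib.sha256(src_ip.encode()).hexdigest(), 16)
    return WANS[h % len(WANS)]

# A speedtest run opens many parallel streams. Without stickiness they
# spread across both WANs; with stickiness they all follow their source.
streams = [pick_wan_sticky("192.168.1.50") for _ in range(8)]
assert len(set(streams)) == 1  # all eight streams use the same WAN
```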
Now, if I turn on sticky connections, here is what happens:
Server: NETGEAR Inc. - San Jose, CA (id: 43447)
ISP: Sail Internet
Idle Latency: 9.94 ms (jitter: 1.82ms, low: 9.94ms, high: 17.22ms)
Download: 324.81 Mbps (data used: 579.1 MB)
99.00 ms (jitter: 25.55ms, low: 19.52ms, high: 186.42ms)
Upload: 40.70 Mbps (data used: 55.4 MB)
238.00 ms (jitter: 65.47ms, low: 14.98ms, high: 543.64ms)
In this case, the traffic graph showed everything went to Sail Internet, and nothing to Verizon.
And this is the expected outcome, since the whole idea with sticky connections is to keep each application or session going out only one of the WANs, to avoid problems.
When you test both of them simultaneously, you say that you still see traffic going out both WANs, meaning that load balancing is in fact still working.
Now, both simultaneously:
Server: eero - San Jose, CA (id: 41818)
ISP: Sail Internet
Idle Latency: 9.95 ms (jitter: 0.06ms, low: 9.91ms, high: 10.09ms)
Download: 66.31 Mbps (data used: 39.9 MB)
25.81 ms (jitter: 9.77ms, low: 12.38ms, high: 91.09ms)
Upload: 32.49 Mbps (data used: 43.9 MB)
249.48 ms (jitter: 67.59ms, low: 77.49ms, high: 663.97ms)
Server: Verizon - San Francisco, CA (id: 29623)
ISP: Verizon Wireless
Idle Latency: 19.43 ms (jitter: 9.60ms, low: 15.29ms, high: 29.71ms)
Download: 51.55 Mbps (data used: 66.5 MB)
190.94 ms (jitter: 58.95ms, low: 13.09ms, high: 427.55ms)
Upload: 11.74 Mbps (data used: 12.2 MB)
284.39 ms (jitter: 71.24ms, low: 15.91ms, high: 811.42ms)
Traffic graph shows some traffic went to both ISPs.
This might not happen every time though...
Without sticky connections, the large number of "streams" that speedtest sets up end up being distributed (round-robin) across the WANs according to the interface weights you set.
With sticky connections turned on, the streams are "stuck" together per application/session, so all of the streams from each individual speedtest session go out a single interface. And with only two sessions running simultaneously, you will now and then end up with both of them going out the same interface (3:1 it's Verizon).
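As a back-of-envelope check, here is a sketch assuming two equally weighted WANs and independent assignment per session (a simplification of how pfSense actually hashes sources to gateways):

```python
import random

def fraction_on_same_wan(trials=100_000, weights=(1, 1)):
    # Fraction of trials in which two independent sticky sessions
    # land on the same of two weighted WAN interfaces.
    same = 0
    for _ in range(trials):
        a = random.choices((0, 1), weights=weights)[0]
        b = random.choices((0, 1), weights=weights)[0]
        same += a == b
    return same / trials

print(fraction_on_same_wan())  # with equal weights, roughly 0.5
```

So with two equally weighted WANs, roughly half of the time both sessions would share one link, which matches seeing it "now and then".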
Which sort of shows the reduced benefit of load balancing in a household scenario with only a few simultaneous users... It takes a large number of parallel sessions to make full use of both WAN connections.
Why the combined bandwidth is much lower when using sticky connections is a bit of a mystery, though. Perhaps someone else can explain that phenomenon, if it is always the case...