How to load-test UDP traffic
-
How would I use iPerf to debug traffic shaping on WAN?
-
Well, you would need something on the WAN or past the WAN running iperf. I would validate that you can actually do more than what you're wanting to shape it to, then turn on your shaping and see if it shapes to what you want. iPerf is going to try and send at the rate you set.
So if you're trying to send at 100 and you're shaping to 50, you should see it limited to 50.
There are some public iperf servers you can test with, but I'm not sure how much bandwidth they can handle. Google "public iperf servers" and you should find a few you could try.
Better would be to put your own box running iperf on your WAN network.
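For reference, a minimal iperf3 UDP run along those lines could look like this (a hedged sketch: the server address, rate, and duration are placeholders to adjust to your own shaper settings):

```
# On a box on or past the WAN (or a public iperf server):
iperf3 -s

# From a client behind the shaper: send UDP at 100 Mbit/s for 30 seconds.
# With the shaper limiting to 50 Mbit/s, the receiver-side summary should
# show roughly 50 Mbit/s, with the excess reported as lost datagrams.
iperf3 -c 192.0.2.10 -u -b 100M -t 30
```

The final UDP report (bandwidth, jitter, lost/total datagrams) is what you compare against the shaper limit.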
-
Thanks a lot @johnpoz! I used one of these servers which was sufficient for now to pinpoint the issue!
-
Would you like to share what you found? Might help the next guy..
-
Sorry, didn't want to go off-topic too much.
I got latency spikes and packet loss, at least partly due to high CPU usage. I'll have to wait at least a week to verify my changes under normal load.
- I used more queues than necessary, which seems to influence CPU usage. I'm trying to reuse queues where appropriate.
- The MTU set at the interface level has a big influence; setting it (much) smaller than necessary can kill the connection almost instantly under synthetic load.
- I took a look at Tunables, where in particular setting the IP Input Queue (intr_queue) to 10000 and disabling TSO/LRO in loader.conf drastically improved latency and packet loss under synthetic load (rough shell equivalents below).
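For anyone wanting to try the same outside the GUI, roughly equivalent shell commands would be something like this (a hedged sketch: I applied these via the Tunables page and loader.conf, and the igb0 interface name below is a placeholder to verify on your own box):

```
# Enlarge the IP input (intr) queue -- the "IP Input Queue" tunable mentioned above.
sysctl net.inet.ip.intr_queue_maxlen=10000

# Disable TSO and LRO on the NIC (igb0 is a placeholder for your WAN interface).
ifconfig igb0 -tso -lro
```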
-
@discy said in How to load-test UDP traffic:
where it being (much) smaller than necessary
You mean you have the MTU set on the interface, like 576 or something vs the default 1500? Yeah, that could cause CPU to get used ;) because the router has to fragment everything bigger than the MTU down to the MTU.
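If you want to watch that fragmentation happening, something like this works (a hedged sketch: the interface name and address are placeholders):

```
# On the router, watch for IP fragments on the interface with the lowered MTU:
tcpdump -n -i igb1 'ip[6:2] & 0x3fff != 0'

# From a client, send payloads larger than the lowered MTU.
# (On Linux, add -M dont so the DF bit is cleared and fragmentation is allowed.)
ping -s 1450 192.0.2.10
```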
-
@johnpoz Good to know I wasn't imagining it ;). 1400 already seemed like a noticeable difference compared to 1500, and it got worse quickly as the MTU was lowered further. Unfortunately I can't provide hard numbers for changing only the MTU, as CPU usage varied a lot when I started investigating the MTU as a culprit. It was definitely not the only cause of my issues.
In the end, monitored packet loss and latency have dropped from 1300 ms+ / 50%+ to 120 ms / 3% under 100 Mbps iPerf3 UDP load.
-
Why was the MTU not default? Just curious...
This video? I would think, yeah, that's going to be sending large packets.. I would think..
-
I've been dealing with this issue for some weeks now, trying all kinds of stuff. I used ping -f to determine that 1472 would be the correct value. Currently back at the default, probably as it should be.
This video? I would think, yeah, that's going to be sending large packets.. I would think..
Yes. Will wait and see what happens when people start working again.
-
Yeah, the only reasons you should change from the default 1500 would be some very, very specific ones, and doing so would for sure come with its own set of concerns..
I see this quite often with customers thinking they should set jumbo frames, with no thought into why... They just think it will be better... Sorry, but NO!! It's a PITA for no actual gain.. If it's some sort of SAN-only connection and you're moving very large chunks of data, OK, sure, but otherwise it's pretty much a waste of time and effort... But you sure as hell don't make your whole LAN try to do 9k jumbo frames ;)
-
@discy said in How to load-test UDP traffic:
1472 would be the correct value
That is not what the MTU would be set to on the interface.. The MTU on the interface should be 1500. 1472 is what would pass after overhead.
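For anyone following along, the arithmetic is 1500 (interface MTU) minus 20 bytes of IP header minus 8 bytes of ICMP header = 1472 bytes of ping payload. A quick way to confirm it from a client (a hedged sketch: the flags differ per OS and the address is a placeholder):

```
# Windows: -f sets don't-fragment, -l sets the payload size.
ping -f -l 1472 192.0.2.1

# FreeBSD/macOS: -D sets the don't-fragment bit, -s sets the payload size.
ping -D -s 1472 192.0.2.1
```

If 1472 goes through but 1473 fails with a fragmentation-needed error, the path MTU is 1500 and the interface setting is fine as-is.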
-
@johnpoz Thanks. That proves your point about the misinformation out there and why it's a good thing that it's back at the default.
I'll give an update if my issues remain after the changes described above and/or if I gather more insights.
Vincent