UDP performance issue with SG-1000
-
Hi,
I am doing some tests with the SG-1000 and I am seeing packet loss when testing with iperf. Everything is fine with TCP, but I get errors with UDP, even with very small packets. I have a Wireshark trace that shows the loss when passing through the SG-1000, compared against a port mirror that bypasses the firewall.
Is there any optimization possible, or more testing I could do? Has anybody already experienced issues with UDP packets on the SG-1000?
I have attached two screenshots.
Thanks in advance.
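For UDP runs, the relevant number is the lost/total datagram count that the iperf3 server prints in its summary. A small helper along these lines can pull that figure out of a report; this is only a sketch, and the sample line below is hypothetical (its layout imitates iperf3's UDP "Jitter  Lost/Total Datagrams" columns):

```python
import re

def udp_loss(line):
    """Extract (lost, total, percent) from an iperf3-style UDP summary line."""
    m = re.search(r"(\d+)/(\d+)\s+\(([\d.]+)%\)", line)
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2)), float(m.group(3))

# Hypothetical server-side summary for a 30 s UDP run:
sample = "[  4]   0.00-30.00  sec   358 MBytes   100 Mbits/sec  0.042 ms  412/256400 (0.16%)"
print(udp_loss(sample))  # → (412, 256400, 0.16)
```

Comparing that percentage between the mirror-port capture and the through-firewall capture should quantify how much of the loss the SG-1000 itself introduces.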
-
iperf3, no firewall:
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-30.00  sec  1.19 GBytes   341 Mbits/sec  500    sender
[  4]   0.00-30.00  sec  1.19 GBytes   341 Mbits/sec         receiver

ipfw (single rule):

[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-30.00  sec  1.01 GBytes   291 Mbits/sec  381    sender
[  4]   0.00-30.00  sec  1.01 GBytes   290 Mbits/sec         receiver

and pf (single rule):

[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-30.00  sec   731 MBytes   204 Mbits/sec  361    sender
[  4]   0.00-30.00  sec   730 MBytes   204 Mbits/sec         receiver

pkt-gen (large packets), no firewall, ~580 Mbit/s:
313.994151 main_thread [2019] 45.851 Kpps (48.763 Kpkts 585.156 Mbps in 1063505 usec) 5.47 avg_batch 1015 min_space
315.057147 main_thread [2019] 45.838 Kpps (48.726 Kpkts 584.712 Mbps in 1062996 usec) 5.45 avg_batch 1015 min_space
316.092160 main_thread [2019] 45.854 Kpps (47.459 Kpkts 569.508 Mbps in 1035013 usec) 5.47 avg_batch 1015 min_space
317.116221 main_thread [2019] 45.838 Kpps (46.941 Kpkts 563.292 Mbps in 1024062 usec) 5.48 avg_batch 1015 min_space
318.140208 main_thread [2019] 45.846 Kpps (46.946 Kpkts 563.352 Mbps in 1023987 usec) 5.44 avg_batch 1015 min_space
319.203146 main_thread [2019] 45.831 Kpps (48.715 Kpkts 584.580 Mbps in 1062937 usec) 5.45 avg_batch 1015 min_space
320.266145 main_thread [2019] 45.827 Kpps (48.714 Kpkts 584.568 Mbps in 1063000 usec) 5.47 avg_batch 1015 min_space
321.329146 main_thread [2019] 45.842 Kpps (48.730 Kpkts 584.760 Mbps in 1063001 usec) 5.43 avg_batch 1015 min_space
322.392147 main_thread [2019] 45.845 Kpps (48.733 Kpkts 584.796 Mbps in 1063000 usec) 5.48 avg_batch 1015 min_space
323.455147 main_thread [2019] 45.850 Kpps (48.739 Kpkts 584.868 Mbps in 1063000 usec) 5.46 avg_batch 1015 min_space
324.509646 main_thread [2019] 45.850 Kpps (48.349 Kpkts 580.188 Mbps in 1054500 usec) 5.45 avg_batch 1015 min_space

with pf (single rule):
498.389631 main_thread [2019] 27.494 Kpps (27.549 Kpkts 330.588 Mbps in 1002000 usec) 3.42 avg_batch 1019 min_space
499.391631 main_thread [2019] 27.513 Kpps (27.568 Kpkts 330.816 Mbps in 1002000 usec) 3.45 avg_batch 1019 min_space
500.393640 main_thread [2019] 27.503 Kpps (27.558 Kpkts 330.696 Mbps in 1002008 usec) 3.46 avg_batch 1019 min_space
501.419083 main_thread [2019] 27.502 Kpps (28.202 Kpkts 338.424 Mbps in 1025443 usec) 3.44 avg_batch 1019 min_space
502.419632 main_thread [2019] 27.509 Kpps (27.524 Kpkts 330.288 Mbps in 1000549 usec) 3.44 avg_batch 1019 min_space
503.420638 main_thread [2019] 27.545 Kpps (27.573 Kpkts 330.876 Mbps in 1001006 usec) 3.45 avg_batch 1019 min_space
504.430635 main_thread [2019] 27.530 Kpps (27.805 Kpkts 333.660 Mbps in 1009998 usec) 3.44 avg_batch 1019 min_space

and with ipfw (single rule):
597.124126 main_thread [2019] 37.585 Kpps (39.953 Kpkts 479.436 Mbps in 1062999 usec) 4.61 avg_batch 1017 min_space
598.186628 main_thread [2019] 37.587 Kpps (39.936 Kpkts 479.232 Mbps in 1062502 usec) 4.60 avg_batch 1017 min_space
599.250127 main_thread [2019] 37.589 Kpps (39.976 Kpkts 479.712 Mbps in 1063500 usec) 4.60 avg_batch 1017 min_space
600.251626 main_thread [2019] 37.583 Kpps (37.639 Kpkts 451.668 Mbps in 1001498 usec) 4.62 avg_batch 1017 min_space
601.313294 main_thread [2019] 37.573 Kpps (39.890 Kpkts 478.680 Mbps in 1061669 usec) 4.60 avg_batch 1017 min_space
602.359629 main_thread [2019] 37.600 Kpps (39.342 Kpkts 472.104 Mbps in 1046334 usec) 4.63 avg_batch 1017 min_space
-
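As a back-of-the-envelope comparison (not a measurement), the quoted iperf3 averages put the per-filter overhead relative to the no-firewall baseline at roughly:

```python
# Relative throughput loss vs. the no-firewall iperf3 baseline quoted above
# (341 Mbit/s bare, 291 Mbit/s with ipfw, 204 Mbit/s with pf).
BASELINE = 341.0  # Mbit/s, no firewall

def overhead_pct(rate, base=BASELINE):
    """Percent throughput lost relative to the unfiltered baseline."""
    return round((base - rate) / base * 100, 1)

print(overhead_pct(291.0))  # ipfw, single rule → 14.7
print(overhead_pct(204.0))  # pf, single rule   → 40.2
```

So even a single pf rule costs on the order of 40% of the box's forwarding throughput in these runs, versus about 15% for ipfw.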
Thanks for your answer; I see that it works on your side. Were you using UDP packets? My pfSense has only one rule (any-any) and does NAT with some port forwarding. Do you think that could be the problem?
Gw