10GbE Tuning?
-
Or in the GUI via System > Advanced > Miscellaneous.
Yes, that is what I meant. On an ALIX APU we saw the throughput go
from ~450 MBit/s to 650 - 750 MBit/s just by enabling or adjusting the PowerD options.
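For reference, on plain FreeBSD the equivalent of that GUI option would be roughly this in /etc/rc.conf; this is only a sketch, and "hiadaptive" is just an assumption about what the pfSense setting maps to:
powerd_enable="YES"
powerd_flags="-a hiadaptive"   # -a selects the profile used while on AC power
-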
Sounds pretty nice!
-
Nikkon, this fits your use case: https://redmine.pfsense.org/issues/4821
Steve
-
Thx Steve ;)
-
It explains the low throughput in one direction, but there's no solution for it as yet. I haven't tried the suggested patch, but we are at least aware of it now.
Steve
-
Waiting for it ;)
Thx for the update. -
As an update: on all my systems with igb I get
sysctl -a | grep '.igb..*x_pack'
dev.igb.0.queue0.tx_packets: 38931223
dev.igb.0.queue0.rx_packets: 42548203
dev.igb.1.queue0.tx_packets: 39439021
dev.igb.1.queue0.rx_packets: 36697705
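A quick generic way to see how many queues the driver actually created, and how the interrupts are spread across them, is to check the per-queue interrupt counters (a standard FreeBSD check, not something suggested in this thread):
vmstat -i | grep igb    # one "igbX:que N" line per allocated queue
-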
I found another thread that points to this issue and explains it really well:
mbufs tunable
Please also pay close attention to the pfSense versions named there.
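As a rough sketch of the kind of check that thread describes (the value below is a placeholder, not a recommendation from this thread): look at the current mbuf usage first, and raise the cluster limit at boot time only if you are close to the maximum:
netstat -m    # look at "mbuf clusters in use (current/cache/total/max)"
# /boot/loader.conf.local
kern.ipc.nmbclusters="1000000"    # boot-time tunable; size it to the available RAM
-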
Nikkon, you only have one queue per NIC in each direction? With a 4-core CPU I would expect to see at least 2 queues. Do you have a tunable set that limits that?
Steve
-
dev.igb.0.queue0.no_desc_avail: 0
dev.igb.0.queue0.tx_packets: 7824580
dev.igb.0.queue0.rx_packets: 9484446
dev.igb.0.queue0.rx_bytes: 8615598781
dev.igb.0.queue0.lro_queued: 0
dev.igb.0.queue0.lro_flushed: 0
dev.igb.1.queue0.no_desc_avail: 0
dev.igb.1.queue0.tx_packets: 9365166
dev.igb.1.queue0.rx_packets: 7891338
dev.igb.1.queue0.rx_bytes: 5772762364
dev.igb.1.queue0.lro_queued: 0
dev.igb.1.queue0.lro_flushed: 0
I believe I need to add more. Is there a recommended value? -
sysctl -a | grep hw.igb.num_queues
hw.igb.num_queues: 4
Same result.
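If a limit were set somewhere, it would normally be a boot-time tunable in /boot/loader.conf.local; a minimal sketch, with example values only:
# /boot/loader.conf.local
hw.igb.num_queues=0    # 0 = let the driver create one queue per core, up to the hardware limit
#hw.igb.num_queues=4   # or pin the count explicitly; a reboot is needed for it to take effect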
-
I have seen copy speeds between two FreeBSD 10 hosts of 1115 MiB/s via dd and mbuffer, and an actual file copy via zfs send/receive through mbuffer of 829 MiB/s (disk system limitation).
I could not get iperf to fill the link. That is with Supermicro X9 mainboards and a Xeon 1220 CPU via X520-DA2 NICs, with no tuning at all.
I tried a lot of parameter tuning and it did not do much for me, but it might be different for firewall usage. See: https://www.youtube.com/watch?v=mfOePFKekQI&feature=youtu.be
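For anyone who wants to reproduce that kind of test, a minimal sketch of the two pipelines (the address, port, dataset and snapshot names are placeholders, not the exact commands used above):
# raw network throughput: start the receiver first, then the sender
mbuffer -I 5001 -s 128k -m 1G > /dev/null                                        # receiving host
dd if=/dev/zero bs=1M count=20000 | mbuffer -s 128k -m 1G -O 192.0.2.10:5001     # sending host
# real data copy with zfs send/receive through the same buffer
mbuffer -I 5001 -s 128k -m 1G | zfs receive tank/backup                          # receiving host
zfs send tank/data@snap | mbuffer -s 128k -m 1G -O 192.0.2.10:5001               # sending host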
-
@Downloadski
In my opinion that is a bit like discussing ZeroShell, IPCop, IPFire and SmoothWall while someone
reports the throughput between two plain Linux hosts. I may be wrong here, because pfSense
is based on FreeBSD, but as I read in this forum, changes were made so that it no longer really
matches ordinary FreeBSD. Please correct me if I am wrong about this. -
Well, these systems are standard FreeBSD 10, and 10GbE runs at line rate with a 1500 MTU without any tweaking. I tried a lot of buffer settings until I found out they did not matter much.
It was just about finding the proper way to fill the interface up. I wrote this because I saw lots of posts with all kinds of proposed changes.
If this is too far off the subject any admin can remove it, no problem; I just thought I could add something useful. -
Enjoyed reading the discussion in this post!
I have two HP DL180 G6s with 32 GB RAM and two dual-port Chelsio cards. Two ports are on the WAN side in LACP (each port on a different card), and two ports are on the LAN side in LACP (each port on a different card). Testing from one VLAN to another with iperf3, I'm getting about 8 Gb/s (UDP) and about 4 Gb/s (TCP). I've disabled Hardware TCP Segmentation Offloading and Hardware Large Receive Offloading. I'm sure there's a better way for me to do this, and I feel that my TCP result is quite low; does anyone have any recommendations?
Below are my configs:
System Tunables:
net.inet.ip.fastforwarding 1
net.inet.ip.redirect 0
net.inet.ip.intr_queue_maxlen 3000
kern.ipc.maxsockbuf 16777216
net.inet.tcp.sendbuf_max 16777216
net.inet.tcp.recvbuf_max 16777216
net.inet.tcp.sendbuf_inc 262144
net.inet.tcp.recvbuf_inc 262144
net.route.netisr_maxqlen 2048
net.inet.tcp.tso 0
/boot/loader.conf.local:
kern.cam.boot_delay=10000
kern.ipc.nmbclusters="1048576"
net.inet.tcp.tso=0
hw.pci.enable_msix=0
kern.ipc.nmbjumbop="1048576"
net.isr.bindthreads=0
net.isr.maxthreads=1
kern.random.sys.harvest.ethernet=0
kern.random.sys.harvest.point_to_point=0
kern.random.sys.harvest.interrupt=0
net.isr.defaultqlimit=2048
net.isr.maxqlimit=40960
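For context, a minimal iperf3 invocation for this kind of test with several parallel TCP streams through the firewall; the server address is just a placeholder and this is not necessarily the exact command used above:
iperf3 -s                          # on a host behind the far-side interface
iperf3 -c 192.0.2.20 -P 8 -t 30    # on the near side: 8 parallel TCP streams for 30 seconds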
-
Do you think you were able to reach the full 10 GBit/s line rate with iperf?
-
I set the MTU on these to 9000 yesterday, and 9000 on the iperf servers I'm using, and was able to saturate the link (9.5 Gb/s). So I'm pretty sure I'm hitting just one interface. But going back down to the default MTU of 1500 knocks the speed down quite a bit. I think I've done everything correctly, but I may have overlooked something quite obvious.
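A quick sanity check that jumbo frames actually survive the whole path (the address is a placeholder; 8972 bytes = 9000 minus 20 bytes IP and 8 bytes ICMP header):
ping -D -s 8972 192.0.2.20    # -D sets the don't-fragment bit; this only succeeds if every hop passes 9000-byte frames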
-
It's trivial to saturate a 10 Gbps link between two machines, using much less hardware than you have, without changing the MTU.
-
I understand. But after following post after post and blog after blog, it just seems that my TCP performance test is a little low. Is this expected? I'm also shooting for over 10Gb/s with LACP, but I'm just not getting that. Not quite sure where I'm going wrong here.