10GbE Tuning?
-
On my current setup - J1900 + SSD + 8 GB RAM and 2x Intel i210T.
For PPPoE I get very low speed, around 346.96 Mbit/s. The connection is 1000 Mbit/s. I'm not expecting to get 1000, but at least 50%.
dev.igb.0.%desc: Intel(R) PRO/1000 Network Connection version - 2.4.0 -
What CPU usage are you seeing?
Steve
-
How does that divide across the cores? I imagine you have one core at 100%
Steve
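(A quick way to see how the load spreads across cores on FreeBSD/pfSense is top in per-CPU mode; a minimal sketch, flags as in stock FreeBSD top:)
# Per-CPU usage, system processes and threads, refreshed every second
top -P -S -H -s 1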
-
One core is reaching 100%, yeah… the rest of them are low in load.
-
Are you able to adjust this via the PowerD function, perhaps?
-
According to what I have found on the forum, this is meant to be "adaptive".
I found that we can change the parameters by editing /etc/inc/system.inc
Or in the GUI via System - Advanced - Miscellaneous. -
Yes, this is what I meant. With an Alix APU we were seeing the throughput go
from ~450 Mbit/s to 650 - 750 Mbit/s only by activating or changing the PowerD options. -
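(For reference, that GUI option drives powerd; on a plain FreeBSD box a rough equivalent sketch would be the lines below in /etc/rc.conf, with hiadaptive only as an example mode:)
# Enable powerd for CPU frequency scaling
powerd_enable="YES"
# -a = mode on AC power, -n = mode when the power source is unknown
powerd_flags="-a hiadaptive -n hiadaptive"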
Sounds pretty nice!
-
Nikkon, this fits your use case: https://redmine.pfsense.org/issues/4821
Steve
-
Thx Steve ;)
-
It explains the low throughput in one direction, but there's no solution to it as yet. Haven't tried the suggested patch, but we are at least aware of it now.
Steve
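(Not the patch from that ticket, and only as a hedged thing to experiment with: a workaround sometimes mentioned for single-queue PPPoE traffic is deferring netisr dispatch so inbound packets get spread over the netisr threads, e.g. as a System Tunable:)
# System > Advanced > System Tunables; treat as experimental
net.isr.dispatch=deferred
# net.isr.maxthreads is a loader tunable and may also need raising, depending on the version's default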
-
Waiting for it ;)
Thx for the update. -
As an update, on all my systems with igb I got:
sysctl -a | grep '.igb..*x_pack'
dev.igb.0.queue0.tx_packets: 38931223
dev.igb.0.queue0.rx_packets: 42548203
dev.igb.1.queue0.tx_packets: 39439021
dev.igb.1.queue0.rx_packets: 36697705 -
I found another thread pointing to this circumstance, where it is explained really nicely:
mbufs tunable
Also, please take a close look at the pfSense versions named there. -
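(For context, the mbuf tuning usually referenced there is the cluster limit; a minimal sketch, where 1000000 is only an example value that should be sized to your RAM:)
# /boot/loader.conf.local - raise the mbuf cluster limit at boot
kern.ipc.nmbclusters="1000000"
# Check current usage at runtime with: netstat -m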
Nikkon, you only have one queue per NIC in each direction? You have a 4-core CPU; I would expect to see at least 2 queues. Do you have a tunable set to limit that?
Steve
-
dev.igb.0.queue0.no_desc_avail: 0
dev.igb.0.queue0.tx_packets: 7824580
dev.igb.0.queue0.rx_packets: 9484446
dev.igb.0.queue0.rx_bytes: 8615598781
dev.igb.0.queue0.lro_queued: 0
dev.igb.0.queue0.lro_flushed: 0
dev.igb.1.queue0.no_desc_avail: 0
dev.igb.1.queue0.tx_packets: 9365166
dev.igb.1.queue0.rx_packets: 7891338
dev.igb.1.queue0.rx_bytes: 5772762364
dev.igb.1.queue0.lro_queued: 0
dev.igb.1.queue0.lro_flushed: 0
I believe I need to add more.
Is there a recommended value? -
sysctl -a | grep hw.igb.num_queues
hw.igb.num_queues: 4
Same result.
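(If the queue count ever needs to be forced, the loader tunable goes in /boot/loader.conf.local; a sketch, where 0 lets the driver autodetect:)
# /boot/loader.conf.local - igb queue count, 0 = autodetect
hw.igb.num_queues="0"
# Reboot, then verify with: sysctl hw.igb.num_queues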
-
I have seen copy speeds between 2 FreeBSD 10 hosts via dd and mbuffer of 1115 MiB/s, and an actual file copy with zfs send and receive through mbuffer of 829 MiB/s (disk system limitation).
Could not get iperf to fill the link. That is with Supermicro X9 mainboards and Xeon 1220 CPUs via X520-DA2 NICs. No tuning at all.
I tried a lot of tuning of parameters and it did not do much for me. But it might be different for firewall usage. See: https://www.youtube.com/watch?v=mfOePFKekQI&feature=youtu.be
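(For what it's worth, a single iperf stream often tops out well below line rate at 10 GbE; a sketch with parallel streams, where 192.168.1.10 is just a placeholder address:)
# On the receiving host:
iperf -s
# On the sending host: 4 parallel TCP streams for 30 seconds
iperf -c 192.168.1.10 -P 4 -t 30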
-
@Downloadski
In my opinion that is a bit like talking about ZeroShell, IPCop, IPFire and SmoothWall while someone
reports the throughput between two plain Linux hosts. I may be wrong about this, because pfSense
is based on FreeBSD, but as I read here in the forum, changes were made so that it no longer really
matches ordinary FreeBSD now. Please correct me if I am wrong about this.