10GbE Tuning?
-
What fiber SFP+ modules are you guys using for the Intel X520?
I use mostly Direct Attach cables, not optics, but when I do use fiber it's either genuine Intel modules or knockoffs from approvedoptics.com.
Nice, regular SFF-8431 cables like http://www.ebay.com/itm/Intel-XDACBL5M-Twinaxial-Netwrk-Cable-Twinaxial-Netwrk-16-40ft-SFF-8431-Male-/311114486894?pt=LH_DefaultDomain_0&hash=item486fde546e or? And this card is 2x10Gbit full duplex? If so, is LACP at 2x10Gbit possible?
-
Yes, cables like that.
Theoretically, yes, you could put two links together for 2x10Gbit/s.
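Bear in mind LACP balances per flow, though, so a single connection still tops out at 10Gbit/s; the aggregate only helps with many parallel streams. A rough sketch of the FreeBSD side, assuming the X520 ports show up as ix0/ix1 (interface names and the address are placeholders):

ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport ix0 laggport ix1 192.0.2.1/24 up
# the switch side must be configured as a matching LACP (802.3ad) group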
-
Now do the math on 40Gbps NICs and a $99 break-out cable. :-)
-
On my actual setup (J1900 + SSD + 8 GB RAM + 2x Intel i210-T):
For PPPoE I get very low speed, around 346.96 Mbit/s; the connection is 1000 Mbit/s. I'm not expecting to get 1000, but at least 50%.
dev.igb.0.%desc: Intel(R) PRO/1000 Network Connection version - 2.4.0
-
What CPU usage are you seeing?
Steve
-
How does that divide across the cores? I imagine you have one core at 100%.
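For example, a quick way to see the per-core split on FreeBSD:

top -PSH
# -P = one usage line per CPU, -S = show kernel threads, -H = list threads individually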
Steve
-
One core is reaching 100%, yeah… the rest of them are lightly loaded.
-
Are you able to adjust this via the PowerD function, perhaps?
-
According to what I have found on the forum, this is meant to be "adaptive".
Found that we can change the parameters by editing /etc/inc/system.inc,
or in the GUI via System > Advanced > Miscellaneous.
-
Yes, that's what I meant. On an ALIX APU we saw throughput go from ~450 Mbit/s to 650-750 Mbit/s just by enabling or changing the PowerD options.
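For reference, those GUI modes map onto FreeBSD's powerd(8) flags; a minimal sketch of the equivalent invocation (the mode choices are just an example):

powerd -a hiadaptive -b adaptive -n adaptive
# -a = mode on AC power, -b = on battery, -n = when the power source is unknown
-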
Sounds pretty nice!
-
Nikkon, this fits your use case: https://redmine.pfsense.org/issues/4821
Steve
-
Thx Steve ;)
-
It explains the low throughput in one direction, but there's no solution to it as yet. Haven't tried the suggested patch, but we are at least aware of it now.
Steve
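One workaround discussed around that ticket (untested here, just a sketch) is deferring netisr dispatch so PPPoE traffic isn't processed entirely in one context; via System > Advanced > System Tunables, or directly:

sysctl net.isr.dispatch=deferred
# hands inbound packets to the netisr thread(s) instead of processing them in the receiving interrupt context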
-
Waiting for it ;)
Thx for the update.
-
As an update, on all my systems with igb I get:
sysctl -a | grep '.igb..*x_pack'
dev.igb.0.queue0.tx_packets: 38931223
dev.igb.0.queue0.rx_packets: 42548203
dev.igb.1.queue0.tx_packets: 39439021
dev.igb.1.queue0.rx_packets: 36697705
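Each queue gets its own MSI-X vector, so a quick way to confirm how many queues the driver actually created:

vmstat -i | grep igb
# expect one 'que' interrupt line per queue, plus one link/admin line per port
-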
I found another thread about this issue where it's explained really well:
mbufs tunable
Also take a close look at the pfSense versions named there, please.
-
Nikkon, you only have one queue per NIC in each direction? You have a 4-core CPU; I'd expect to see at least 2 queues. Do you have a tunable set to limit that?
Steve
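One tunable that could cap it is igb's hw.igb.num_queues (my guess at what's meant here; the forced value below is just an illustration):

sysctl hw.igb.num_queues
# 0 = size the queues automatically from the CPU (default)
# a fixed value set in /boot/loader.conf.local, e.g. hw.igb.num_queues="2", forces 2 queues per port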
-
dev.igb.0.queue0.no_desc_avail: 0
dev.igb.0.queue0.tx_packets: 7824580
dev.igb.0.queue0.rx_packets: 9484446
dev.igb.0.queue0.rx_bytes: 8615598781
dev.igb.0.queue0.lro_queued: 0
dev.igb.0.queue0.lro_flushed: 0

dev.igb.1.queue0.no_desc_avail: 0
dev.igb.1.queue0.tx_packets: 9365166
dev.igb.1.queue0.rx_packets: 7891338
dev.igb.1.queue0.rx_bytes: 5772762364
dev.igb.1.queue0.lro_queued: 0
dev.igb.1.queue0.lro_flushed: 0

I believe I need to add more. Is there a recommended value?
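For what it's worth, the usual knob is kern.ipc.nmbclusters; a sketch for checking and raising it (the value is only a commonly cited starting point for multi-queue igb/ix, not an official recommendation):

# check current mbuf/cluster usage and denials first:
netstat -m
# then raise the limit via a loader tunable in /boot/loader.conf.local, e.g.:
kern.ipc.nmbclusters="1000000"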