Issue with throughput with X710-DA2 and PFsense 2.6
-
I've been researching this now for about 2 weeks and finally resolved to the fact I am going to have to create a post on the issue.
Hardware:
Supermicro C2758 / 16GB RAM / 512 / 256GB SSD / Intel X710-DA2 dual 10Gb SFP+ adapter
Internet Service: 1Gbit/50Mbit
PFsense WAN: ixl1 (Connected at 1Gbit/s)
PFsense LAN: ixl0
Hardware Checksum Offloading: Disabled
Hardware TCP Segmentation Offloading: Disabled
Hardware Large Receive Offloading: Disabled
hw.ixl.flow_control="3" (in loader.conf.local)
Codel/FQ_Codel: Enabled (These Settings)
Suricata: Installed and enabled (But no change if disabled/uninstalled)
4 VLANs on LAN interface
Speedtest installed on PFsense
When I run a speed test directly on the PFsense box, I get results ranging from 250Mbit/47Mbit to 300Mbit/47Mbit, no matter the time of day.
CPU package utilization <35% with Suricata enabled and running during testing
RAM utilization < 10%
The same results are seen when run from any client connected via ethernet cable to the LAN.
But I can put my Ubiquiti Dream Machine Pro directly in its place and see 940Mbit/47Mbit.
Is there some other magic setting I'm missing to get this to be closer to 940/47?
-
The first thing I would try, if you have not already is to disable flow control.
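For reference, flow control on the ixl driver can be set either as a boot-time tunable or per interface at runtime (a sketch; the interface index, here ixl1, is assumed to be the WAN):

```shell
# Boot-time tunable in /boot/loader.conf.local (takes effect after reboot):
# hw.ixl.flow_control="0"

# Or per interface at runtime (dev.ixl.1 assumed to be the WAN NIC):
sysctl dev.ixl.1.fc=0

# Verify the current setting:
sysctl dev.ixl.1.fc
```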
Try running an iperf test between a LAN-side client and pfSense directly, or between an iperf client and server on different LAN-side VLANs. Make sure you can pass 1G on the LAN side.
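A minimal way to run that test, assuming the pfSense LAN address is 192.168.1.1 (adjust to your setup): start the server on pfSense and drive it from a LAN host.

```shell
# On pfSense (iperf3 is available as a package):
iperf3 -s

# On the LAN client: single stream first, then several parallel streams
iperf3 -c 192.168.1.1
iperf3 -c 192.168.1.1 -P 4

# For the inter-VLAN case, run the same pair of commands between two
# LAN hosts on different VLANs, one as -s and the other as -c <server-ip>.
```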
Steve
-
@stephenw10 Sorry, I should have added that I also tried with flow control disabled and it didn't make any difference when testing directly from the PFsense box or from a LAN-connected client.
-
@stephenw10 When running iperf from a physical machine attached to the LAN to the PFsense box, using multiple streams (I can only achieve 600+Mbps on a single stream), I'm able to achieve 1.1Gbps transfer between the physical host and the PFsense box.
-
Is that from a client connected at 10G?
Try running at the command line:
top -HaSP
Whilst running a test how does the per core CPU loading look?
That NIC should have multiple queues and be able to use multiple CPU cores if the test opens several connections. iperf deliberately doesn't but speedtest does.
The other thing to check is the CPU frequency. On some hardware, like our own SG-8860, you need to have powerd enabled otherwise the CPU runs at its lowest speed.
[2.6.0-RELEASE][admin@t70.stevew.lan]/root: sysctl dev.cpu.0.freq
dev.cpu.0.freq: 560
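To see the current speed alongside the frequency steps powerd can use (standard FreeBSD sysctl names, so they should behave the same on pfSense):

```shell
# Current CPU frequency and the available frequency levels:
sysctl dev.cpu.0.freq dev.cpu.0.freq_levels

# powerd is enabled in the GUI (System > Advanced > Miscellaneous);
# confirm it is actually running:
ps ax | grep '[p]owerd'
```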
Steve
-
@stephenw10 said in Issue with throughput with X710-DA2 and PFsense 2.6:
Is that from a client connected at 10G?
No, it's connected at 1Gbit; the SFP module I have on the LAN at the moment is a 1Gbit Ethernet module until my Intel-compatible 10G fiber module arrives tomorrow.
Try running at the command line: top -HaSP
Whilst running a test how does the per core CPU loading look?
That NIC should have multiple queues and be able to use multiple CPU cores if the test opens several connections. iperf deliberately doesn't but speedtest does.
When running iperf3 in a single stream, LAN-connected client to PFsense, 1 core maxes at 100%. When running a normal speed test from a client to the internet, all cores come under load, up to about 20%.
The other thing to check is the CPU frequency. On some hardware, like our own SG-8860, you need to have powerd enabled otherwise the CPU runs at it's lowest speed.
powerd was already enabled, and the frequency is being reported as 2400MHz, which is the base frequency of the chip.
-
@bawoodruff said in Issue with throughput with X710-DA2 and PFsense 2.6:
When running iperf3 in a single stream, Lan connected client to PFsense, 1 core maxes at 100%.
Ok, so in that case the limit you're hitting is almost certainly the iperf process itself running on the firewall.
@bawoodruff said in Issue with throughput with X710-DA2 and PFsense 2.6:
I'm able to achieve 1.1Gbps transfer from the physical host and the PFsense box.
I'm not sure how you managed >1Gbps on a 1G link unless you ran it bidir and summed the results?
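If that >1G figure came from a bidirectional run, recent iperf3 versions (3.7 and later) can do it in a single invocation; note the two directions are reported separately rather than summed (192.168.1.1 assumed to be pfSense here):

```shell
# Bidirectional test, iperf3 >= 3.7:
iperf3 -c 192.168.1.1 --bidir

# Older versions: run a forward and a reverse (-R) test separately
iperf3 -c 192.168.1.1
iperf3 -c 192.168.1.1 -R
```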
I imagine that box has other NICs? The C2758 has 4 igb NICs on the SoC which are usually used. Can you test using those instead?
Steve
-
Here are the sysctl outputs for hw.ixl and dev.ixl.1
sysctl hw.ixl
hw.ixl.flow_control: 0 hw.ixl.tx_itr: 122 hw.ixl.rx_itr: 62 hw.ixl.shared_debug_mask: 0 hw.ixl.core_debug_mask: 0 hw.ixl.enable_head_writeback: 1 hw.ixl.enable_vf_loopback: 1 hw.ixl.i2c_access_method: 0
sysctl dev.ixl.1
dev.ixl.1.mac.xoff_recvd: 0 dev.ixl.1.mac.xoff_txd: 0 dev.ixl.1.mac.xon_recvd: 0 dev.ixl.1.mac.xon_txd: 0 dev.ixl.1.mac.tx_frames_big: 0 dev.ixl.1.mac.tx_frames_1024_1522: 378916 dev.ixl.1.mac.tx_frames_512_1023: 6193 dev.ixl.1.mac.tx_frames_256_511: 3632 dev.ixl.1.mac.tx_frames_128_255: 20274 dev.ixl.1.mac.tx_frames_65_127: 831542 dev.ixl.1.mac.tx_frames_64: 86690 dev.ixl.1.mac.checksum_errors: 0 dev.ixl.1.mac.rx_jabber: 0 dev.ixl.1.mac.rx_oversized: 0 dev.ixl.1.mac.rx_fragmented: 0 dev.ixl.1.mac.rx_undersize: 0 dev.ixl.1.mac.rx_frames_big: 0 dev.ixl.1.mac.rx_frames_1024_1522: 1430387 dev.ixl.1.mac.rx_frames_512_1023: 13770 dev.ixl.1.mac.rx_frames_256_511: 11445 dev.ixl.1.mac.rx_frames_128_255: 20127 dev.ixl.1.mac.rx_frames_65_127: 179894 dev.ixl.1.mac.rx_frames_64: 155235 dev.ixl.1.mac.rx_length_errors: 0 dev.ixl.1.mac.remote_faults: 0 dev.ixl.1.mac.local_faults: 1 dev.ixl.1.mac.illegal_bytes: 0 dev.ixl.1.mac.crc_errors: 0 dev.ixl.1.mac.bcast_pkts_txd: 3 dev.ixl.1.mac.mcast_pkts_txd: 51 dev.ixl.1.mac.ucast_pkts_txd: 1327193 dev.ixl.1.mac.good_octets_txd: 653008038 dev.ixl.1.mac.rx_discards: 0 dev.ixl.1.mac.bcast_pkts_rcvd: 104495 dev.ixl.1.mac.mcast_pkts_rcvd: 222 dev.ixl.1.mac.ucast_pkts_rcvd: 1706141 dev.ixl.1.mac.good_octets_rcvd: 2207151224 dev.ixl.1.pf.txq07.itr: 122 dev.ixl.1.pf.txq07.bytes: 85865860 dev.ixl.1.pf.txq07.packets: 160951 dev.ixl.1.pf.txq07.mss_too_small: 0 dev.ixl.1.pf.txq07.tso: 0 dev.ixl.1.pf.txq06.itr: 122 dev.ixl.1.pf.txq06.bytes: 32231199 dev.ixl.1.pf.txq06.packets: 133979 dev.ixl.1.pf.txq06.mss_too_small: 0 dev.ixl.1.pf.txq06.tso: 0 dev.ixl.1.pf.txq05.itr: 122 dev.ixl.1.pf.txq05.bytes: 72052507 dev.ixl.1.pf.txq05.packets: 123085 dev.ixl.1.pf.txq05.mss_too_small: 0 dev.ixl.1.pf.txq05.tso: 0 dev.ixl.1.pf.txq04.itr: 122 dev.ixl.1.pf.txq04.bytes: 96555685 dev.ixl.1.pf.txq04.packets: 184606 dev.ixl.1.pf.txq04.mss_too_small: 0 dev.ixl.1.pf.txq04.tso: 0 dev.ixl.1.pf.txq03.itr: 122 dev.ixl.1.pf.txq03.bytes: 131018464 dev.ixl.1.pf.txq03.packets: 
184573 dev.ixl.1.pf.txq03.mss_too_small: 0 dev.ixl.1.pf.txq03.tso: 0 dev.ixl.1.pf.txq02.itr: 122 dev.ixl.1.pf.txq02.bytes: 57633132 dev.ixl.1.pf.txq02.packets: 250195 dev.ixl.1.pf.txq02.mss_too_small: 0 dev.ixl.1.pf.txq02.tso: 0 dev.ixl.1.pf.txq01.itr: 122 dev.ixl.1.pf.txq01.bytes: 64624978 dev.ixl.1.pf.txq01.packets: 123393 dev.ixl.1.pf.txq01.mss_too_small: 0 dev.ixl.1.pf.txq01.tso: 0 dev.ixl.1.pf.txq00.itr: 122 dev.ixl.1.pf.txq00.bytes: 107176209 dev.ixl.1.pf.txq00.packets: 166446 dev.ixl.1.pf.txq00.mss_too_small: 0 dev.ixl.1.pf.txq00.tso: 0 dev.ixl.1.pf.rxq07.itr: 62 dev.ixl.1.pf.rxq07.desc_err: 0 dev.ixl.1.pf.rxq07.bytes: 238482036 dev.ixl.1.pf.rxq07.packets: 181784 dev.ixl.1.pf.rxq07.irqs: 145122 dev.ixl.1.pf.rxq06.itr: 62 dev.ixl.1.pf.rxq06.desc_err: 0 dev.ixl.1.pf.rxq06.bytes: 256000391 dev.ixl.1.pf.rxq06.packets: 204750 dev.ixl.1.pf.rxq06.irqs: 145232 dev.ixl.1.pf.rxq05.itr: 62 dev.ixl.1.pf.rxq05.desc_err: 0 dev.ixl.1.pf.rxq05.bytes: 384950723 dev.ixl.1.pf.rxq05.packets: 278571 dev.ixl.1.pf.rxq05.irqs: 163196 dev.ixl.1.pf.rxq04.itr: 62 dev.ixl.1.pf.rxq04.desc_err: 0 dev.ixl.1.pf.rxq04.bytes: 127583125 dev.ixl.1.pf.rxq04.packets: 111013 dev.ixl.1.pf.rxq04.irqs: 138156 dev.ixl.1.pf.rxq03.itr: 62 dev.ixl.1.pf.rxq03.desc_err: 0 dev.ixl.1.pf.rxq03.bytes: 290079936 dev.ixl.1.pf.rxq03.packets: 211873 dev.ixl.1.pf.rxq03.irqs: 154015 dev.ixl.1.pf.rxq02.itr: 62 dev.ixl.1.pf.rxq02.desc_err: 0 dev.ixl.1.pf.rxq02.bytes: 219499771 dev.ixl.1.pf.rxq02.packets: 171285 dev.ixl.1.pf.rxq02.irqs: 162955 dev.ixl.1.pf.rxq01.itr: 62 dev.ixl.1.pf.rxq01.desc_err: 0 dev.ixl.1.pf.rxq01.bytes: 367529376 dev.ixl.1.pf.rxq01.packets: 308351 dev.ixl.1.pf.rxq01.irqs: 184942 dev.ixl.1.pf.rxq00.itr: 62 dev.ixl.1.pf.rxq00.desc_err: 0 dev.ixl.1.pf.rxq00.bytes: 315621563 dev.ixl.1.pf.rxq00.packets: 341267 dev.ixl.1.pf.rxq00.irqs: 278260 dev.ixl.1.pf.rx_errors: 0 dev.ixl.1.pf.bcast_pkts_txd: 3 dev.ixl.1.pf.mcast_pkts_txd: 281474976710651 dev.ixl.1.pf.ucast_pkts_txd: 1327193 
dev.ixl.1.pf.good_octets_txd: 647144782 dev.ixl.1.pf.rx_discards: 4294962076 dev.ixl.1.pf.bcast_pkts_rcvd: 104497 dev.ixl.1.pf.mcast_pkts_rcvd: 281474976710644 dev.ixl.1.pf.ucast_pkts_rcvd: 1706603 dev.ixl.1.pf.good_octets_rcvd: 2207182593 dev.ixl.1.admin_irq: 4 dev.ixl.1.link_active_on_if_down: 1 dev.ixl.1.eee.rx_lpi_count: 0 dev.ixl.1.eee.tx_lpi_count: 0 dev.ixl.1.eee.rx_lpi_status: 0 dev.ixl.1.eee.tx_lpi_status: 0 dev.ixl.1.eee.enable: 0 dev.ixl.1.fw_lldp: 1 dev.ixl.1.dynamic_tx_itr: 0 dev.ixl.1.dynamic_rx_itr: 0 dev.ixl.1.rx_itr: 62 dev.ixl.1.tx_itr: 122 dev.ixl.1.unallocated_queues: 760 dev.ixl.1.fw_version: fw 5.0.40043 api 1.5 nvm 5.05 etid 80002899 oem 17.4352.12 dev.ixl.1.current_speed: 1 Gbps dev.ixl.1.supported_speeds: 6 dev.ixl.1.advertise_speed: 6 dev.ixl.1.fc: 0 dev.ixl.1.iflib.rxq7.rxq_fl0.buf_size: 2048 dev.ixl.1.iflib.rxq7.rxq_fl0.credits: 1023 dev.ixl.1.iflib.rxq7.rxq_fl0.cidx: 536 dev.ixl.1.iflib.rxq7.rxq_fl0.pidx: 535 dev.ixl.1.iflib.rxq7.cpu: 7 dev.ixl.1.iflib.rxq6.rxq_fl0.buf_size: 2048 dev.ixl.1.iflib.rxq6.rxq_fl0.credits: 1023 dev.ixl.1.iflib.rxq6.rxq_fl0.cidx: 974 dev.ixl.1.iflib.rxq6.rxq_fl0.pidx: 973 dev.ixl.1.iflib.rxq6.cpu: 6 dev.ixl.1.iflib.rxq5.rxq_fl0.buf_size: 2048 dev.ixl.1.iflib.rxq5.rxq_fl0.credits: 1023 dev.ixl.1.iflib.rxq5.rxq_fl0.cidx: 43 dev.ixl.1.iflib.rxq5.rxq_fl0.pidx: 42 dev.ixl.1.iflib.rxq5.cpu: 5 dev.ixl.1.iflib.rxq4.rxq_fl0.buf_size: 2048 dev.ixl.1.iflib.rxq4.rxq_fl0.credits: 1023 dev.ixl.1.iflib.rxq4.rxq_fl0.cidx: 421 dev.ixl.1.iflib.rxq4.rxq_fl0.pidx: 420 dev.ixl.1.iflib.rxq4.cpu: 4 dev.ixl.1.iflib.rxq3.rxq_fl0.buf_size: 2048 dev.ixl.1.iflib.rxq3.rxq_fl0.credits: 1023 dev.ixl.1.iflib.rxq3.rxq_fl0.cidx: 929 dev.ixl.1.iflib.rxq3.rxq_fl0.pidx: 928 dev.ixl.1.iflib.rxq3.cpu: 3 dev.ixl.1.iflib.rxq2.rxq_fl0.buf_size: 2048 dev.ixl.1.iflib.rxq2.rxq_fl0.credits: 1023 dev.ixl.1.iflib.rxq2.rxq_fl0.cidx: 277 dev.ixl.1.iflib.rxq2.rxq_fl0.pidx: 276 dev.ixl.1.iflib.rxq2.cpu: 2 dev.ixl.1.iflib.rxq1.rxq_fl0.buf_size: 2048 
dev.ixl.1.iflib.rxq1.rxq_fl0.credits: 1023 dev.ixl.1.iflib.rxq1.rxq_fl0.cidx: 127 dev.ixl.1.iflib.rxq1.rxq_fl0.pidx: 126 dev.ixl.1.iflib.rxq1.cpu: 1 dev.ixl.1.iflib.rxq0.rxq_fl0.buf_size: 2048 dev.ixl.1.iflib.rxq0.rxq_fl0.credits: 1023 dev.ixl.1.iflib.rxq0.rxq_fl0.cidx: 275 dev.ixl.1.iflib.rxq0.rxq_fl0.pidx: 274 dev.ixl.1.iflib.rxq0.cpu: 0 dev.ixl.1.iflib.txq7.r_abdications: 0 dev.ixl.1.iflib.txq7.r_restarts: 0 dev.ixl.1.iflib.txq7.r_stalls: 0 dev.ixl.1.iflib.txq7.r_starts: 160945 dev.ixl.1.iflib.txq7.r_drops: 0 dev.ixl.1.iflib.txq7.r_enqueues: 160955 dev.ixl.1.iflib.txq7.ring_state: pidx_head: 1211 pidx_tail: 1211 cidx: 1211 state: IDLE dev.ixl.1.iflib.txq7.txq_cleaned: 191596 dev.ixl.1.iflib.txq7.txq_processed: 191604 dev.ixl.1.iflib.txq7.txq_in_use: 8 dev.ixl.1.iflib.txq7.txq_cidx_processed: 116 dev.ixl.1.iflib.txq7.txq_cidx: 108 dev.ixl.1.iflib.txq7.txq_pidx: 116 dev.ixl.1.iflib.txq7.no_tx_dma_setup: 0 dev.ixl.1.iflib.txq7.txd_encap_efbig: 0 dev.ixl.1.iflib.txq7.tx_map_failed: 0 dev.ixl.1.iflib.txq7.no_desc_avail: 0 dev.ixl.1.iflib.txq7.mbuf_defrag_failed: 0 dev.ixl.1.iflib.txq7.m_pullups: 0 dev.ixl.1.iflib.txq7.mbuf_defrag: 0 dev.ixl.1.iflib.txq7.cpu: 7 dev.ixl.1.iflib.txq6.r_abdications: 0 dev.ixl.1.iflib.txq6.r_restarts: 0 dev.ixl.1.iflib.txq6.r_stalls: 0 dev.ixl.1.iflib.txq6.r_starts: 133977 dev.ixl.1.iflib.txq6.r_drops: 0 dev.ixl.1.iflib.txq6.r_enqueues: 133982 dev.ixl.1.iflib.txq6.ring_state: pidx_head: 0862 pidx_tail: 0862 cidx: 0862 state: IDLE dev.ixl.1.iflib.txq6.txq_cleaned: 147107 dev.ixl.1.iflib.txq6.txq_processed: 147115 dev.ixl.1.iflib.txq6.txq_in_use: 8 dev.ixl.1.iflib.txq6.txq_cidx_processed: 683 dev.ixl.1.iflib.txq6.txq_cidx: 675 dev.ixl.1.iflib.txq6.txq_pidx: 683 dev.ixl.1.iflib.txq6.no_tx_dma_setup: 0 dev.ixl.1.iflib.txq6.txd_encap_efbig: 0 dev.ixl.1.iflib.txq6.tx_map_failed: 0 dev.ixl.1.iflib.txq6.no_desc_avail: 0 dev.ixl.1.iflib.txq6.mbuf_defrag_failed: 0 dev.ixl.1.iflib.txq6.m_pullups: 0 dev.ixl.1.iflib.txq6.mbuf_defrag: 0 
dev.ixl.1.iflib.txq6.cpu: 6 dev.ixl.1.iflib.txq5.r_abdications: 0 dev.ixl.1.iflib.txq5.r_restarts: 0 dev.ixl.1.iflib.txq5.r_stalls: 0 dev.ixl.1.iflib.txq5.r_starts: 123086 dev.ixl.1.iflib.txq5.r_drops: 0 dev.ixl.1.iflib.txq5.r_enqueues: 123086 dev.ixl.1.iflib.txq5.ring_state: pidx_head: 0206 pidx_tail: 0206 cidx: 0206 state: IDLE dev.ixl.1.iflib.txq5.txq_cleaned: 147867 dev.ixl.1.iflib.txq5.txq_processed: 147875 dev.ixl.1.iflib.txq5.txq_in_use: 8 dev.ixl.1.iflib.txq5.txq_cidx_processed: 419 dev.ixl.1.iflib.txq5.txq_cidx: 411 dev.ixl.1.iflib.txq5.txq_pidx: 419 dev.ixl.1.iflib.txq5.no_tx_dma_setup: 0 dev.ixl.1.iflib.txq5.txd_encap_efbig: 0 dev.ixl.1.iflib.txq5.tx_map_failed: 0 dev.ixl.1.iflib.txq5.no_desc_avail: 0 dev.ixl.1.iflib.txq5.mbuf_defrag_failed: 0 dev.ixl.1.iflib.txq5.m_pullups: 0 dev.ixl.1.iflib.txq5.mbuf_defrag: 0 dev.ixl.1.iflib.txq5.cpu: 5 dev.ixl.1.iflib.txq4.r_abdications: 0 dev.ixl.1.iflib.txq4.r_restarts: 0 dev.ixl.1.iflib.txq4.r_stalls: 0 dev.ixl.1.iflib.txq4.r_starts: 184606 dev.ixl.1.iflib.txq4.r_drops: 0 dev.ixl.1.iflib.txq4.r_enqueues: 184608 dev.ixl.1.iflib.txq4.ring_state: pidx_head: 0288 pidx_tail: 0288 cidx: 0288 state: IDLE dev.ixl.1.iflib.txq4.txq_cleaned: 198488 dev.ixl.1.iflib.txq4.txq_processed: 198496 dev.ixl.1.iflib.txq4.txq_in_use: 8 dev.ixl.1.iflib.txq4.txq_cidx_processed: 864 dev.ixl.1.iflib.txq4.txq_cidx: 856 dev.ixl.1.iflib.txq4.txq_pidx: 864 dev.ixl.1.iflib.txq4.no_tx_dma_setup: 0 dev.ixl.1.iflib.txq4.txd_encap_efbig: 0 dev.ixl.1.iflib.txq4.tx_map_failed: 0 dev.ixl.1.iflib.txq4.no_desc_avail: 0 dev.ixl.1.iflib.txq4.mbuf_defrag_failed: 0 dev.ixl.1.iflib.txq4.m_pullups: 0 dev.ixl.1.iflib.txq4.mbuf_defrag: 0 dev.ixl.1.iflib.txq4.cpu: 4 dev.ixl.1.iflib.txq3.r_abdications: 0 dev.ixl.1.iflib.txq3.r_restarts: 0 dev.ixl.1.iflib.txq3.r_stalls: 0 dev.ixl.1.iflib.txq3.r_starts: 184566 dev.ixl.1.iflib.txq3.r_drops: 0 dev.ixl.1.iflib.txq3.r_enqueues: 184573 dev.ixl.1.iflib.txq3.ring_state: pidx_head: 0253 pidx_tail: 0253 cidx: 0253 state: 
IDLE dev.ixl.1.iflib.txq3.txq_cleaned: 202062 dev.ixl.1.iflib.txq3.txq_processed: 202070 dev.ixl.1.iflib.txq3.txq_in_use: 8 dev.ixl.1.iflib.txq3.txq_cidx_processed: 342 dev.ixl.1.iflib.txq3.txq_cidx: 334 dev.ixl.1.iflib.txq3.txq_pidx: 342 dev.ixl.1.iflib.txq3.no_tx_dma_setup: 0 dev.ixl.1.iflib.txq3.txd_encap_efbig: 0 dev.ixl.1.iflib.txq3.tx_map_failed: 0 dev.ixl.1.iflib.txq3.no_desc_avail: 0 dev.ixl.1.iflib.txq3.mbuf_defrag_failed: 0 dev.ixl.1.iflib.txq3.m_pullups: 0 dev.ixl.1.iflib.txq3.mbuf_defrag: 0 dev.ixl.1.iflib.txq3.cpu: 3 dev.ixl.1.iflib.txq2.r_abdications: 0 dev.ixl.1.iflib.txq2.r_restarts: 0 dev.ixl.1.iflib.txq2.r_stalls: 0 dev.ixl.1.iflib.txq2.r_starts: 250190 dev.ixl.1.iflib.txq2.r_drops: 0 dev.ixl.1.iflib.txq2.r_enqueues: 250196 dev.ixl.1.iflib.txq2.ring_state: pidx_head: 0340 pidx_tail: 0340 cidx: 0340 state: IDLE dev.ixl.1.iflib.txq2.txq_cleaned: 256287 dev.ixl.1.iflib.txq2.txq_processed: 256295 dev.ixl.1.iflib.txq2.txq_in_use: 8 dev.ixl.1.iflib.txq2.txq_cidx_processed: 295 dev.ixl.1.iflib.txq2.txq_cidx: 287 dev.ixl.1.iflib.txq2.txq_pidx: 295 dev.ixl.1.iflib.txq2.no_tx_dma_setup: 0 dev.ixl.1.iflib.txq2.txd_encap_efbig: 0 dev.ixl.1.iflib.txq2.tx_map_failed: 0 dev.ixl.1.iflib.txq2.no_desc_avail: 0 dev.ixl.1.iflib.txq2.mbuf_defrag_failed: 0 dev.ixl.1.iflib.txq2.m_pullups: 0 dev.ixl.1.iflib.txq2.mbuf_defrag: 0 dev.ixl.1.iflib.txq2.cpu: 2 dev.ixl.1.iflib.txq1.r_abdications: 0 dev.ixl.1.iflib.txq1.r_restarts: 0 dev.ixl.1.iflib.txq1.r_stalls: 0 dev.ixl.1.iflib.txq1.r_starts: 123390 dev.ixl.1.iflib.txq1.r_drops: 0 dev.ixl.1.iflib.txq1.r_enqueues: 123393 dev.ixl.1.iflib.txq1.ring_state: pidx_head: 0513 pidx_tail: 0513 cidx: 0513 state: IDLE dev.ixl.1.iflib.txq1.txq_cleaned: 145295 dev.ixl.1.iflib.txq1.txq_processed: 145303 dev.ixl.1.iflib.txq1.txq_in_use: 8 dev.ixl.1.iflib.txq1.txq_cidx_processed: 919 dev.ixl.1.iflib.txq1.txq_cidx: 911 dev.ixl.1.iflib.txq1.txq_pidx: 919 dev.ixl.1.iflib.txq1.no_tx_dma_setup: 0 dev.ixl.1.iflib.txq1.txd_encap_efbig: 0 
dev.ixl.1.iflib.txq1.tx_map_failed: 0 dev.ixl.1.iflib.txq1.no_desc_avail: 0 dev.ixl.1.iflib.txq1.mbuf_defrag_failed: 0 dev.ixl.1.iflib.txq1.m_pullups: 0 dev.ixl.1.iflib.txq1.mbuf_defrag: 0 dev.ixl.1.iflib.txq1.cpu: 1 dev.ixl.1.iflib.txq0.r_abdications: 0 dev.ixl.1.iflib.txq0.r_restarts: 0 dev.ixl.1.iflib.txq0.r_stalls: 0 dev.ixl.1.iflib.txq0.r_starts: 166433 dev.ixl.1.iflib.txq0.r_drops: 0 dev.ixl.1.iflib.txq0.r_enqueues: 166447 dev.ixl.1.iflib.txq0.ring_state: pidx_head: 0559 pidx_tail: 0559 cidx: 0559 state: IDLE dev.ixl.1.iflib.txq0.txq_cleaned: 180055 dev.ixl.1.iflib.txq0.txq_processed: 180063 dev.ixl.1.iflib.txq0.txq_in_use: 8 dev.ixl.1.iflib.txq0.txq_cidx_processed: 863 dev.ixl.1.iflib.txq0.txq_cidx: 855 dev.ixl.1.iflib.txq0.txq_pidx: 863 dev.ixl.1.iflib.txq0.no_tx_dma_setup: 0 dev.ixl.1.iflib.txq0.txd_encap_efbig: 0 dev.ixl.1.iflib.txq0.tx_map_failed: 0 dev.ixl.1.iflib.txq0.no_desc_avail: 0 dev.ixl.1.iflib.txq0.mbuf_defrag_failed: 0 dev.ixl.1.iflib.txq0.m_pullups: 0 dev.ixl.1.iflib.txq0.mbuf_defrag: 0 dev.ixl.1.iflib.txq0.cpu: 0 dev.ixl.1.iflib.override_nrxds: 0 dev.ixl.1.iflib.override_ntxds: 0 dev.ixl.1.iflib.use_logical_cores: 0 dev.ixl.1.iflib.separate_txrx: 0 dev.ixl.1.iflib.core_offset: 0 dev.ixl.1.iflib.tx_abdicate: 0 dev.ixl.1.iflib.rx_budget: 0 dev.ixl.1.iflib.disable_msix: 0 dev.ixl.1.iflib.override_qs_enable: 0 dev.ixl.1.iflib.override_nrxqs: 0 dev.ixl.1.iflib.override_ntxqs: 0 dev.ixl.1.iflib.driver_version: 2.3.1-k dev.ixl.1.%parent: pci4 dev.ixl.1.%pnpinfo: vendor=0x8086 device=0x1572 subvendor=0x8086 subdevice=0x0000 class=0x020000 dev.ixl.1.%location: slot=0 function=1 dbsf=pci0:4:0:1 dev.ixl.1.%driver: ixl dev.ixl.1.%desc: Intel(R) Ethernet Controller X710 for 10GbE SFP+ - 2.3.1-k
-
Correct, that was the summed results from iperf of the simultaneous streams.
Also correct, this box has four igb interfaces, but the point in using this card is that I will be switching ISPs in the near future, at which point I'm going to be using a fiber module straight from the ONT. In the meantime, I'm trying to get this machine to use that same card with a 1Gb Ethernet module, to save having to redo the majority of the configuration later when I do switch. But I'm not even able to get the 900+Mbps I'm able to get with the UDMP. TBH this is the second card I've purchased; I also tried this with a Chelsio T520-CR with different SFP modules and saw the same results.
-
Well I would confirm you're able to pass 1G with the igb NICs before going further. That should be no problem for that device. Our own firewalls using that CPU could pass it easily.
The C2758 with a Chelsio NIC was good for somewhere in the 3-4Gbps range.
Steve
-
WAN set up on igb2 instead of the ixl interface
iperf LAN client -> PFsense same result
Speedtest from PFsense to internet no change also.
Processor utilization on both is the same as earlier also.
-
So that was replacing the LAN interface? Are you able to test igb as WAN?
-
No, that was replacing the WAN Interface
-
Hmm, well it could be the LAN side. Try it with igb as both.
You might also try testing against a local iperf server on the WAN side if you can to rule out anything upstream.
Steve
-
If I'm SSH'ed into the PFsense box running speedtest-cli and only getting 300-ish download speeds, how would that involve the LAN adapter? Outside of being able to SSH into the PFsense box, that is.
-
It doesn't, but speedtest-cli is not an infallible tool, especially at higher speeds. An iperf test would be much better, though it really should use a local server. It's rare to find a public iperf server that can push 1G, in my experience. Near me at least.
Steve
-
My 10Gb SFP+ fiber modules I bought for the ixl interface came in today (they weren't supposed to arrive till tomorrow). I swapped out the 1Gbit SFP module that was in the LAN interface, and I'm able to get about 4.5Gbps between the LAN client and PFsense using multiple streams running:
iperf3 -c 192.168.1.1 -P 20
but still only getting 300ish when I try a normal speed test from PFsense to the internet, or from the LAN client to the internet.
WAN: igb 1Gbit interface
LAN: ixl 10Gbit interface
Client running iperf3 -> 10Gb LAN -> PFsense (iperf3 server) = 4.5ish Gbps
ssh -> PFsense -> speedtest-cli -> WAN -> Internet = 300ish Mb/s
Client running speedtest -> 10Gb LAN -> PFsense -> WAN -> Internet = 300ish Mb/s
The 300Mbps speed was basically the same whether the WAN was on an igb 1Gbit interface or the other ixl interface with a 1Gbit SFP module.
I've tried multiple speed test sites from the client machine during testing and they all show basically the same speed.
-
Hmm, how is the WAN actually connected here?
Do you see any collisions or errors on the interfaces?
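Those counters can be checked from the shell with standard FreeBSD tools, which behave the same on pfSense:

```shell
# One-shot per-interface counters; look at the Ierrs/Oerrs/Drop columns:
netstat -i

# Live counters refreshed every second for the WAN NIC (igb2 assumed):
netstat -w 1 -I igb2
```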
We have seen ISPs that provide a reduced rate profile to devices other than what is registered, which could still be the UDM. Have you tried spoofing the MAC address on WAN to match the UDM?
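The MAC can be spoofed in the GUI (Interfaces > WAN, MAC Address field) or, for a quick test, from the shell; the interface name and address below are placeholders, not the real UDM values:

```shell
# Temporarily override the WAN NIC's MAC (igb2 and the address are examples):
ifconfig igb2 ether 00:11:22:33:44:55

# Then release/renew the WAN DHCP lease so the ISP sees the new MAC.
```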
Steve
-
Cable Co. -> DOCSIS 3.1 Modem -> Cat6 -> igb interface on PFsense box
ISP Connection is regulated by MAC address filtering of the Modem itself.
I actually already tried cloning the UDMP's MAC on the PFsense box and saw no change in speed; it just grabbed the IP address last assigned to the UDMP.
-
And can I assume a client connected directly to the modem gets a public IP and can see the full bandwidth?