Intel NIC I-226V
-
@Antibiotic said in Intel NIC I-226V:
@bmeeks This is what I got from the command:
igc0@pci0:1:0:0: class=0x020000 rev=0x04 hdr=0x00 vendor=0x8086 device=0x125c subvendor=0x8086 subdevice=0x0000
vendor = 'Intel Corporation'
device = 'Ethernet Controller I226-V'
class = network
subclass = ethernet
Can you please tell me where I can check this NIC's supported options based on this info?
Here is where the driver for that hardware family was introduced into FreeBSD, and thus pfSense (the i225 and i226 are the same NIC family): https://cgit.freebsd.org/src/commit/?id=d7388d33b4dd. If you look through the git diff you can find the man page showing the supported tunables.
-
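As a sketch of where to look once the driver is loaded: the commands below are generic FreeBSD sysctl/man invocations, not igc-specific knobs; the authoritative tunable names are the ones documented in the igc(4) man page added by the linked commit.

```shell
#!/bin/sh
# Sketch (FreeBSD/pfSense): enumerate what the igc(4) driver exposes.
man 4 igc                        # driver man page, including loader tunables
sysctl -d hw.igc 2>/dev/null     # driver-wide tunables with descriptions, if exposed
sysctl dev.igc.0 2>/dev/null     # per-device OIDs for igc0
```

Loader tunables found this way would then be set in /boot/loader.conf.local and take effect after a reboot.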
You can see what the driver/hardware is reporting as capable like:
[24.03-BETA][admin@4200.stevew.lan]/root: ifconfig -vvm igc0
igc0: flags=1008843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST,LOWER_UP> metric 0 mtu 1500
	options=48020b8<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,WOL_MAGIC,HWSTATS,MEXTPG>
	capabilities=4f43fbb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,WOL_UCAST,WOL_MCAST,WOL_MAGIC,VLAN_HWTSO,NETMAP,RXCSUM_IPV6,TXCSUM_IPV6,HWSTATS,MEXTPG>
	ether 00:08:a2:12:ec:d4
	media: Ethernet autoselect
	status: no carrier
	supported media:
		media autoselect
		media 2500Base-T
		media 1000baseT
		media 1000baseT mediaopt full-duplex
		media 100baseTX mediaopt full-duplex
		media 100baseTX
		media 10baseT/UTP mediaopt full-duplex
		media 10baseT/UTP
	nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
	drivername: igc0
That is the same NIC in the 4200:
[24.03-BETA][admin@4200.stevew.lan]/root: pciconf -lv igc0
igc0@pci0:25:0:0: class=0x020000 rev=0x04 hdr=0x00 vendor=0x8086 device=0x125c subvendor=0x8086 subdevice=0x0000
    vendor     = 'Intel Corporation'
    device     = 'Ethernet Controller I226-V'
    class      = network
    subclass   = ethernet
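To make sense of the capabilities=4f43fbb word above: it is a bitmask of IFCAP_* flags from FreeBSD's net/if.h. A minimal portable-shell sketch decoding a few of those bits (the masks are the stock IFCAP_* values; only a subset is checked, so verify against your source tree):

```shell
#!/bin/sh
# Decode a few bits of the ifconfig "capabilities=" word for igc0.
caps=0x4f43fbb   # capabilities word reported for igc0

check() {  # check NAME MASK -> print NAME if that bit is set in $caps
    if [ $(( caps & $2 )) -ne 0 ]; then
        printf '%s ' "$1"
    fi
}

check RXCSUM   0x000001
check TXCSUM   0x000002
check TSO4     0x000100
check TSO6     0x000200
check LRO      0x000400
check NETMAP   0x100000
echo
# prints: RXCSUM TXCSUM TSO4 TSO6 LRO NETMAP
```

The NETMAP bit is what Suricata's inline mode relies on later in this thread.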
-
@stephenw10 Ok, thanks!
-
@stephenw10 But actually, is it normal that with Suricata, CrowdSec, and traffic-shaping limiters running (limiters set to about 95% of max capacity), my ISP speed drops to 60-70% of maximum (1 Gb up/down)?
-
No.
However, what is seemingly quite common with those N100/N200 CPUs is that the speed/power values passed by the BIOS result in the CPU running slowly. Check the CPU frequency reported during the test.
-
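On FreeBSD/pfSense the reported frequency can be sampled from the shell while the speed test runs; dev.cpu.0.freq is the current clock in MHz and dev.cpu.0.freq_levels lists the available states. A rough sketch (the fallback line only keeps it harmless on systems without that OID):

```shell
#!/bin/sh
# Sketch: sample the CPU clock once a second during a speed test.
for i in 1 2 3 4 5; do
    sysctl -n dev.cpu.0.freq 2>/dev/null || echo "dev.cpu.0.freq not available"
    sleep 1
done
```

If the value sits far below the rated clock while the test is running, BIOS-supplied power limits are a likely suspect.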
@stephenw10 during speed test result:
Intel(R) N100
Current: 2922 MHz, Max: 806 MHz
4 CPUs: 1 package(s) x 4 core(s)
AES-NI CPU Crypto: Yes (active)
-
@stephenw10 Power Savings - Intel Speed Shift
Speed Shift: Enabled
Current State: Active
Control Level: Core Level Control (Recommended)
(Core-level control is the best practice in most cases, especially for hardware with only a single physical CPU. Changing this setting requires a reboot.)
Current Active Level: Core
Power Preference (Performance / Energy Efficiency): 50
-
Mmm, OK. That seems good.
I'd also try running top -HaSP whilst testing to see the per-core CPU load at the time.
-
@stephenw10 Power Savings - PowerD
PowerD: Disabled
AC Power: Hiadaptive
Battery Power: Hiadaptive
Unknown Power: Hiadaptive
-
@stephenw10 last pid: 68860; load averages: 1.30, 1.05, 0.95 up 0+02:02:23 16:41:57
387 threads: 5 running, 358 sleeping, 24 waiting
CPU 0: 13.8% user, 0.0% nice, 25.2% system, 4.3% interrupt, 56.7% idle
CPU 1: 19.7% user, 0.0% nice, 18.1% system, 9.1% interrupt, 53.1% idle
CPU 2: 23.6% user, 0.0% nice, 17.3% system, 5.9% interrupt, 53.1% idle
CPU 3: 15.4% user, 0.0% nice, 21.7% system, 9.4% interrupt, 53.5% idle
Mem: 513M Active, 1025M Inact, 2057M Wired, 56K Buf, 12G Free
ARC: 539M Total, 270M MFU, 248M MRU, 1661K Anon, 2880K Header, 15M Other
461M Compressed, 1156M Uncompressed, 2.51:1 Ratio
Swap: 1024M Total, 1024M Free -
@stephenw10 Thermal Sensors
Intel Core* CPU on-die thermal sensor
-
You have Speed Shift enabled?
It'd be good to see more of that top output so we know what's generating that CPU load.
-
  PID USERNAME    PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
   11 root        187 ki31     0B    64K CPU2     2 122:26  58.07% [idle{idle: cpu2}]
   11 root        187 ki31     0B    64K CPU1     1 122:25  57.91% [idle{idle: cpu1}]
   11 root        187 ki31     0B    64K RUN      3 122:21  57.13% [idle{idle: cpu3}]
   11 root        187 ki31     0B    64K CPU0     0 124:38  56.37% [idle{idle: cpu0}]
69815 root         24    0   855M   469M select   3   0:57  17.04% /usr/local/bin/suricata --netmap -D -c /usr/local/etc/suricata/suricata_12484_igc1/suricata.yaml --pidfile /var/run/suric
69815 root         24    0   855M   469M select   3   0:58  16.84% /usr/local/bin/suricata --netmap -D -c /usr/local/etc/suricata/suricata_12484_igc1/suricata.yaml --pidfile /var/run/suric
69815 root         23    0   855M   469M select   3   0:50  15.64% /usr/local/bin/suricata --netmap -D -c /usr/local/etc/suricata/suricata_12484_igc1/suricata.yaml --pidfile /var/run/suric
69815 root         23    0   855M   469M select   2   1:05  14.86% /usr/local/bin/suricata --netmap -D -c /usr/local/etc/suricata/suricata_12484_igc1/suricata.yaml --pidfile /var/run/suric
69815 root         24    0   855M   469M select   2   1:05  14.11% /usr/local/bin/suricata --netmap -D -c /usr/local/etc/suricata/suricata_12484_igc1/suricata.yaml --pidfile /var/run/suric
   12 root        -60    -     0B   320K WAIT     3   0:22  10.66% [intr{swi1: netisr 1}]
69815 root         23    0   855M   469M select   0   1:24   9.82% /usr/local/bin/suricata --netmap -D -c /usr/local/etc/suricata/suricata_12484_igc1/suricata.yaml --pidfile /var/run/suric
69815 root         21    0   855M   469M select   0   1:32   7.52% /usr/local/bin/suricata --netmap -D -c /usr/local/etc/suricata/suricata_12484_igc1/suricata.yaml --pidfile /var/run/suric
56778 root         21    0   550M   390M bpf      1   0:43   7.41% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i igc1 -i igc0 --dns-mode 0 --local-ne
56778 root         21    0   550M   390M bpf      0   0:52   7.24% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i igc1 -i igc0 --dns-mode 0 --local-ne
    0 root        -60    -     0B  1808K -        0   0:39   6.15% [kernel{if_io_tqg_0}]
   12 root        -60    -     0B   320K WAIT     0   0:28   6.06% [intr{swi1: netisr 0}]
    0 root        -60    -     0B  1808K RUN      1   0:34   6.03% [kernel{if_io_tqg_1}]
    0 root        -60    -     0B  1808K -        3   0:39   5.93% [kernel{if_io_tqg_3}]
   12 root        -60    -     0B   320K WAIT     2   0:42   5.76% [intr{swi1: netisr 2}]
    0 root        -64    -     0B  1808K -        0   0:22   5.58% [kernel{dummynet}]
   12 root        -60    -     0B   320K WAIT     1   0:33   5.44% [intr{swi1: netisr 3}]
    0 root        -60    -     0B  1808K -        2   0:49   4.18% [kernel{if_io_tqg_2}]
69815 root         20    0   855M   469M select   0   1:23   2.37% /usr/local/bin/suricata --netmap -D -c /usr/local/etc/suricata/suricata_12484_igc1/suricata.yaml --pidfile /var/run/suric
69815 root         20    0   855M   469M uwait    3   0:45   0.24% /usr/local/bin/suricata --netmap -D -c /usr/local/etc/suricata/suricata_12484_igc1/suricata.yaml --pidfile /var/run/suric
56778 root         20    0   550M   390M nanslp   2   0:30   0.21% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i igc1 -i igc0 --dns-mode 0 --local-ne
 6263 root         20    0  1411M   136M uwait    1   0:07   0.17% /usr/local/bin/crowdsec -c /usr/local/etc/crowdsec/config.yaml{crowdsec}
 6263 root         20    0  1411M   136M kqread   2   0:07   0.12% /usr/local/bin/crowdsec -c /usr/local/etc/crowdsec/config.yaml{crowdsec}
56778 root         20    0   550M   390M uwait    3   0:00   0.12% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i igc1 -i igc0 --dns-mode 0 --local-ne
30798 unbound      20    0   355M   305M kqread   0   0:13   0.08% /usr/local/sbin/unbound -c /var/unbound/unbound.conf{unbound}
    2 root        -60    -     0B    64K WAIT     2   0:02   0.08% [clock{clock (2)}]
    8 root        -16    -     0B    16K -        3   0:02   0.08% [rand_harvestq]
    2 root        -60    -     0B    64K WAIT     3   0:01   0.07% [clock{clock (3)}]
56712 root         20    0    36M    12M kqread   1   0:05   0.07% redis-server: /usr/local/bin/redis-server 127.0.0.1:6379 (redis-server){redis-server}
    2 root        -60    -     0B    64K WAIT     0   0:02   0.07% [clock{clock (0)}]
37116 root         20    0    14M  4468K CPU1     1   0:01   0.07% top -HaSP
    2 root        -60    -     0B    64K WAIT     1   0:01   0.06% [clock{clock (1)}]
69815 root         20    0   855M   469M nanslp   1   0:49   0.06% /usr/local/bin/suricata --netmap -D -c /usr/local/etc/suricata/suricata_12484_igc1/suricata.yaml --pidfile /var/run/suric
56778 root         20    0   550M   390M uwait    0   0:00   0.05% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i igc1 -i igc0 --dns-mode 0 --local-ne
-
@stephenw10 Yes, Speed Shift is enabled.
-
Ok, so that's basically all Suricata load. If you disable it as a test, without any other changes, do you get full bandwidth?
-
@stephenw10 Yes, exactly: with Suricata disabled, the speed test shows about 95% of max capacity, as set in the traffic-shaping limiters, just as it should!
-
Hmm. Do you see the same in both legacy and inline mode?
-
@stephenw10 I use inline mode, not legacy mode. Do you want me to test in legacy mode?
-
Yes, test legacy mode. See if there is any change.
-
@stephenw10 In legacy mode the speed looks as it should.