Abysmal Performance after pfSense hardware upgrade
-
@Gblenn said in Abysmal Performance after pfSense hardware upgrade:
@8ayM said in Abysmal Performance after pfSense hardware upgrade:
@stephenw10
That explains the "Module not supported" message I'm getting at the console for using a 10Gtek Ubiquiti 10G SFP+ module. Might look at some new DAC's.
Anyway, restored the old config, made the hardware offloading changes as mentioned above, and things are looking better. Also removed traffic shaping as I'm struggling to imagine hitting that limit short of benchmarking.
So it seems the one thing that made the difference was that you turned off HW checksum offload (the first item in the list)??
@SteveITS said in Abysmal Performance after pfSense hardware upgrade:
@8ayM said in Abysmal Performance after pfSense hardware upgrade:
So is your thought/suggestion to check/disable these feature
Yes we check the three "offloading" checkboxes. Those need a restart.
Is there no benefit at all to having any of the HW offloading active, even with e.g. an X520 NIC? I think I have always had all three turned on, on both my sites (the other site has i211 NICs).
I made those changes yes, but the item that appears to have resolved my issue is I stopped using the 10G DAC's and installed SFP+ Modules and OM4 fiber cable.
I tried 2 x Cisco SFP-H10GB-CU3M, and 5 various 10Gtek cables all SFP+
The only DAC that looks to be working is a 1' CAB-10ZGSFP-P0.3M.
So it looks like I might be the proud owner of 7 questionable SFP+ DACs
Results with the 1' / 0.3m DAC currently in use
-
@8ayM said in Abysmal Performance after pfSense hardware upgrade:
I tried 2 x Cisco SFP-H10GB-CU3M, and 5 various 10Gtek cables all SFP+
The only DAC that looks to be working is a 1' CAB-10ZGSFP-P0.3M.
So it looks like I might be the proud owner of 7 questionable SFP+ DACs
Quite the bummer! And I guess it's clear that it is the X553 that doesn't like the DAC's, and not the switch (which sounds unlikely)?
[EDIT] A bit of googling reveals some problems with linking up X553. Checking out a few of them it seems to boil down to driver updates. And I found a reference to OPNsense working fine with X553. https://vyos.dev/T5619
But I also found this on servethehome.
https://forums.servethehome.com/index.php?threads/10gbit-interface-compatibility-intel-x553-mellanox-connectx-2.21477/
Look at the very bottom where someone has been able to make a parameter change in the kernel module to make it work by setting "allow_unsupported_sfp". Not sure if that would change anything given that you get a link up, it's just the performance that sucks. But I remember having seen a similar thing, but wrt firmware in OEM Intel cards, back when I was looking to upgrade my machine. I have no idea if this possibility even exists in pfSense, perhaps @SteveITS or @stephenw10 knows?
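For reference, if pfSense does expose that knob for the ix(4) driver it would be a boot-time loader tunable rather than a Linux-style module parameter. A minimal sketch, assuming the tunable keeps the name commonly referenced for FreeBSD's ixgbe driver (hw.ix.unsupported_sfp, unverified here) and that /boot/loader.conf.local is used so upgrades don't overwrite it:

# hypothetical: FreeBSD counterpart of the Linux allow_unsupported_sfp module parameter
echo 'hw.ix.unsupported_sfp="1"' >> /boot/loader.conf.local
# loader tunables only take effect after a reboot
shutdown -r now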
-
You can set 'allow unsupported SFP' but that won't help here. It's already allowing the module, it's just unable to read or set the link speed. As far as I know there's nothing we can do about that.
@Gblenn What CPU are you using that's passing 8Gbps with Suricata enabled?
-
@stephenw10 said in Abysmal Performance after pfSense hardware upgrade:
You can set 'allow unsupported SFP' but that won't help here. It's already allowing the module, it's just unable to read or set the link speed. As far as I know there's nothing we can do about that.
Yeah, that's what I thought, as the link is actually up. But it seems to differ depending on HW connected, given that at least one DAC is working. And as others have reported, with the right drivers it seems to work.
@Gblenn What CPU are you using that's passing 8Gbps with Suricata enabled?
It's an i5-11400, but I am running suricata in legacy mode. I can't remember exactly, but I believe I got around 3.5Gbps running inline mode.
And I have it virtualized on Proxmox, set to host CPU with 4 cores assigned.
-
Ah, yes so significantly more powerful than any C3K CPU.
It is interesting that you see no interrupt load though, I agree. I suspect you would see that with Suricata in inline mode.
You do see the expected kernel mode iflib task queue processes though. That's where the traffic and pf load usually appears.
-
@stephenw10 Indeed it is, and with 12 cores I am able to run a few other things as separate VM's without affecting throughput, (NtopNG being one of them).
Are you thinking that if I shift to inline mode for Suricata, I would start seeing interrupt load go up? @8ayM doesn't seem to have Suricata activated but perhaps Ntop would have the same effect?
BTW, I changed the HW offloads this morning (none activated now) and although time of day may affect speedtest results, I did manage to get similar speeds just now.
Also tried disabling Suricata but I don't see any difference in performance...
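One way to double-check that the offload changes really took effect after the reboot is to look at the interface options from the shell. A quick sketch, assuming the ports show up as ix0 (adjust the name to match your assignments):

# TXCSUM, RXCSUM, TSO4/TSO6 and LRO should be missing from the options line when disabled
ifconfig ix0 | grep -i options
# the flags can also be toggled by hand for a quick test; the GUI checkboxes plus a reboot remain the supported way
ifconfig ix0 -rxcsum -txcsum -tso -lro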
-
Mmm, the interrupt loading is interesting. What I expect to see is the task queue group values as you are seeing them.
I have to think it's ntop putting the NIC in promiscuous mode that's doing something there. I don't see that on a C3K system here:
last pid: 39097;  load averages: 0.67, 0.30, 0.21    up 2+08:27:39  21:29:14
340 threads:   6 running, 290 sleeping, 44 waiting
CPU 0:  5.5% user,  0.0% nice, 20.0% system,  0.0% interrupt, 74.5% idle
CPU 1:  2.4% user,  0.0% nice, 10.2% system,  0.0% interrupt, 87.5% idle
CPU 2:  3.1% user,  0.0% nice,  5.5% system,  0.0% interrupt, 91.4% idle
CPU 3:  3.1% user,  0.0% nice,  5.1% system,  0.0% interrupt, 91.8% idle
Mem: 98M Active, 215M Inact, 521M Wired, 3002M Free
ARC: 133M Total, 33M MFU, 93M MRU, 1121K Anon, 976K Header, 5440K Other
     99M Compressed, 244M Uncompressed, 2.47:1 Ratio
Swap: 1024M Total, 1024M Free

  PID USERNAME   PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
   11 root       187 ki31     0B    64K CPU2     2  55.4H  90.08% [idle{idle: cpu2}]
   11 root       187 ki31     0B    64K RUN      3  55.4H  89.88% [idle{idle: cpu3}]
   11 root       187 ki31     0B    64K CPU1     1  55.4H  85.69% [idle{idle: cpu1}]
   11 root       187 ki31     0B    64K CPU0     0  55.3H  76.17% [idle{idle: cpu0}]
    0 root       -60    -     0B  1648K -        2   0:03   4.75% [kernel{if_io_tqg_2}]
    0 root       -60    -     0B  1648K -        1   0:02   3.55% [kernel{if_io_tqg_1}]
    0 root       -60    -     0B  1648K -        3   0:04   2.29% [kernel{if_io_tqg_3}]
10536 root         4    0    84M    33M RUN      3   0:00   2.06% /usr/local/bin/python3.11 /usr/local/bin/speedtest{p
10536 root        56    0    84M    33M usem     1   0:01   1.87% /usr/local/bin/python3.11 /usr/local/bin/speedtest{p
Though it's also clearly not anywhere near the same throughput.
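If it is ntop putting the NIC into promiscuous mode, that should be visible in the interface flags while a capture is running. A quick check from the shell on the box showing the interrupt load (interface name assumed, adjust as needed):

# PROMISC appears in the flags on the first line while something like ntopng is capturing
ifconfig ix0 | head -1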
-
Would you want me to test something on my unit?
I just finished updating to the 5.6.x build, so I may have slightly different results compared to the factory ntopng package, which is usually behind.
-
Latest test running top -HaSP
ntopng Community v.5.6.240304 rev.0 running in background. It had been disabled for most of our testing after it was suggested to do so
-
Did you try testing with ntop-ng disabled? Also try with bandwidthd and darkstat disabled.
-
@stephenw10 said in Abysmal Performance after pfSense hardware upgrade:
Did you try testing with ntop-ng disabled? Also try with bandwidthd and darkstat disabled.
I'll try again when I get home
-
@stephenw10 said in Abysmal Performance after pfSense hardware upgrade:
Did you try testing with ntop-ng disabled? Also try with bandwidthd and darkstat disabled.
As requested
Performed the test disabling them one at a time, announcing which ones. Then a final test again after turning all of them back on.
https://streamable.com/77ahrq
-
Hmm, so still interrupt load with all three disabled? There must be something else set there. You have any custom sysctls set?
-
@stephenw10 said in Abysmal Performance after pfSense hardware upgrade:
Hmm, so still interrupt load with all three disabled? There must be something else set there. You have any custom sysctls set?
Not that I recall, then again this has been an evolution of my early usage of pfSense, which has been going for about 15 years at this point. It always seemed too complicated to start over, and over time that feeling continued to grow.
Here is my current System Tunables:
-
Hmm, nothing unexpected there. You have any custom loader values in /boot/loader.conf.local?
What other packages do you have installed?
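Both of those are quick to check from the shell. A small sketch, assuming the stock pfSense paths and package naming:

# only exists if someone created it by hand; "No such file or directory" is the normal case
ls -l /boot/loader.conf.local
# add-on packages installed through the package manager are named pfSense-pkg-*
pkg info -x pfSense-pkg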
-
[2.7.2-RELEASE][admin@pfSense-Edge01.scs.lan]/boot: vi loader.conf
kern.cam.boot_delay=10000
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
kern.ipc.nmbclusters="1000000"
kern.ipc.nmbjumbo9="524288"
kern.ipc.nmbjumbop="524288"
opensolaris_load="YES"
zfs_load="YES"
opensolaris_load="YES"
zfs_load="YES"
kern.cam.boot_delay=10000
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
kern.ipc.nmbclusters="1000000"
kern.ipc.nmbjumbo9="524288"
kern.ipc.nmbjumbop="524288"
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
cryptodev_load="YES"
zfs_load="YES"
boot_serial="NO"
autoboot_delay="3"
hw.hn.vf_transparent="0"
hw.hn.use_if_start="1"
net.link.ifqmaxlen="128"
machdep.hwpstate_pkg_ctrl="1"
net.pf.states_hashsize="4194304"
-
No loader.conf.local file though?
-
Not that I'm seeing. I could create one if there are persistent items that need to be added.
-
Ok good, nothing unexpected hiding there.
Is Snort running on the interfaces passing traffic during the test? I don't see it in any of your output.
The interrupt load shown really seems to line up with the ntop load though. It makes me wonder if something there is actually still enabled.
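An easy way to rule that out is to confirm nothing from ntopng is left running while the test is repeated. A sketch (process name assumed to match the package's ntopng binary):

# should print nothing if ntopng is really stopped; the bracket trick stops grep from matching itself
ps ax | grep '[n]topng'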
-
I'm not running snort ATM, but I do have pfBlockerNG running