Load updated Intel IX module to get 10Gbps
-
@stephenw10
Hi there!
I was not successful here this morning and had to revert and restore from backup. It seems the routes were not updating, and the same for the firewall rules.
All back up and running. The warning output I saw in the webUI made me think the issue with the limitation might be something else:

Filter Reload
There were error(s) loading the rules: pfctl: interface ix0 bandwidth limited to 4294967295 bps because selected scheduler is 32-bit limited - The line in question reads [0]: @ 2024-03-07 07:38:45
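For context, the cap in that pfctl warning is not arbitrary: ALTQ stores bandwidth in a 32-bit field, so the largest value it can represent is 2^32 - 1 bps, which is exactly the 4294967295 figure in the error (roughly 4.29 Gbps). A quick sanity check:

```shell
# The ALTQ bandwidth cap from the pfctl warning: a 32-bit field
# tops out at 2^32 - 1 bits per second (~4.29 Gbps).
echo $(( (1 << 32) - 1 ))   # prints 4294967295
```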
So I tried disabling traffic shaper completely.
No change though. But most importantly, what went wrong with the interface change? I tried to re-assign in the UI, and also from the command line locally...
-
Ah, looks like you have some limiters set on that and the link speed is above what it can handle. Which itself is interesting.
What traffic shaping do you have enabled there?
Do you have an interface with the bandwidth set as a percentage?
-
@stephenw10
I disabled all traffic shaper queues this morning to test after I saw that output.
No change so far, but I haven't rebooted since. Is a reboot required?
-
No you should not need to reboot.
If you reload the ruleset in Status > Filter Reload does it regenerate the error?
If so that value must still be in the config somewhere.
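If the error does come back, one way to hunt for the leftover value is to grep the config for shaper bandwidth entries. A minimal sketch, assuming the usual pfSense config location of /conf/config.xml; the demo below greps a sample fragment instead so it is self-contained:

```shell
# Sketch: look for lingering shaper bandwidth entries in the config.
# On pfSense the live config is /conf/config.xml (assumption); this
# demo writes and greps a sample fragment so it runs anywhere.
cat > /tmp/sample-config.xml <<'EOF'
<shaper>
  <queue>
    <bandwidth>10</bandwidth>
    <bandwidthtype>Gb</bandwidthtype>
  </queue>
</shaper>
EOF
grep -c '<bandwidth>' /tmp/sample-config.xml   # prints 1
```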
-
@stephenw10 said in Load updated Intel IX module to get 10Gbps:
With the included ix NICs I assume? That's about what I'd expect. Which is why the reports of 25Gbps with Mellanox NICs are so surprising.
Actually, this was mostly tested between two interfaces on a Chelsio T540-SO-CR expansion card. I do recall testing between the Chelsio interfaces and onboard ix interfaces at a time or two as well and seeing similar speeds (i.e. no major increases or decreases).
-
@stephenw10
I tried to do the reload while tail -f on /var/log/system.log
I only saw:
Mar 11 14:40:29 vm12 check_reload_status[331]: Reloading filter
That's where it had logged the error before, I bet?
-
Yes it would show in the system log. It would also show on the filter reload page if the error was regenerated.
-
@stephenw10 So no errors here :)
But the problem isn't solved either: https://hastebin.com/share/visalixawa.bash
I am confused why it still generates ALTQ queues?
Anyway, because of a power outage the firewall might reboot tonight (unless the UPS holds up long enough).
-
It doesn't. The script is simply logging that it has reached that section where it would be creating queues if they were configured.
-
@stephenw10
Good! The question remains: how do I change NICs while keeping the NAT and firewall rules etc.?
Did I not use the right procedure?
-
I don't know. It should be trivial since all the interface config is abstracted from the NIC. You just assign WAN to the new ix NIC and all the settings follow it.
The traffic shaping is potentially an issue because it tries to detect the NIC link speed, and obviously that can/will change.
-
@stephenw10 As I disabled traffic shaping that should not be an issue anymore.
The new interfaces have different names, mce0 and mce1, but that shouldn't matter?
I'll try again at an appropriate time when down-time is possible!
Thanks for the quick replies!
-
Nope the name shouldn't matter, that's the point of abstracting it. You can import a config into completely different hardware and just reassign the interfaces to the existing NICs.
-
@stephenw10 That's what I thought.
The question is (as I asked above) the proper order:
Right now ix0 is WAN, ix1 is LAN
I would probably want to assign LAN to mce1 and apply. Then physically connect the fiber to mce1.
Then ix0 to mce0 can be done once the webUI can be accessed again...
-
Yes that should work. You should also be able to do both at the same time.
I would want to be sure I had some out of band access whilst making that change. That could be via the console or assigning an interface for management access.
-
@stephenw10 That's kind of what I tried, but it wouldn't work; as I mentioned above, it would not apply the firewall rules, NAT etc.
So I restored from backup...
When doing such things I always have a monitor + keyboard ready in the rack for local console access ;)
-
@stephenw10
I managed to change interfaces.
Still no improvement, I even tried to remove some advanced settings:
I guess it's time to test some tuning, but given that we will need to restart each time, I need to do it in the early morning or during the next maintenance window... Any suggestions?
I still have some things in /boot/loader.conf.local:
net.inet.tcp.tso="0"
if_ix_updated_load="YES"
hw.ix.flow_control="0"
hw.ix.num_queues=40
hw.ix.enable_aim=1
hw.ix.max_interrupt_rate=30000
kern.ipc.nmbclusters="1000000"
kern.ipc.nmbjumbop="524288"
machdep.hyperthreading_intr_allowed=1
Might make sense to remove, reboot and try?
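Rather than deleting the file outright, one low-risk approach is to comment every tunable out, so individual lines can be restored later. A minimal sketch; on the firewall the target would be /boot/loader.conf.local (back it up first), while the demo below works on a /tmp copy so it touches nothing live:

```shell
# Sketch: comment out every custom tunable before a test reboot.
# Demonstrated on a /tmp copy; on the firewall the target would be
# /boot/loader.conf.local (assumption), backed up beforehand.
cat > /tmp/loader.conf.local <<'EOF'
net.inet.tcp.tso="0"
hw.ix.flow_control="0"
hw.ix.num_queues=40
EOF
sed 's/^/#/' /tmp/loader.conf.local   # prints each line prefixed with '#'
```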
-
Ok so you're using the Mellanox (mce) NICs now? And still seeing ~4Gbps? Per core CPU usage is still low when testing?
-
[ 5] 37.00-38.00 sec 274 MBytes 2.30 Gbits/sec 812 462 KBytes
[ 5] 38.00-39.00 sec 297 MBytes 2.49 Gbits/sec 479 624 KBytes
[ 5] 39.00-40.00 sec 269 MBytes 2.26 Gbits/sec 724 632 KBytes
[ 5] 40.00-41.00 sec 265 MBytes 2.22 Gbits/sec 685 441 KBytes
[ 5] 41.00-42.00 sec 262 MBytes 2.20 Gbits/sec 65 510 KBytes
[ 5] 42.00-43.00 sec 248 MBytes 2.08 Gbits/sec 770 553 KBytes
[ 5] 43.00-44.00 sec 245 MBytes 2.06 Gbits/sec 971 96.2 KBytes
[ 5] 44.00-45.00 sec 285 MBytes 2.39 Gbits/sec 296 607 KBytes
[ 5] 45.00-46.00 sec 262 MBytes 2.20 Gbits/sec 1108 454 KBytes
[ 5] 46.00-47.00 sec 260 MBytes 2.18 Gbits/sec 690 481 KBytes
[ 5] 47.00-48.00 sec 257 MBytes 2.15 Gbits/sec 582 545 KBytes
[ 5] 48.00-49.00 sec 275 MBytes 2.31 Gbits/sec 253 568 KBytes
[ 5] 49.00-50.00 sec 248 MBytes 2.08 Gbits/sec 1163 399 KBytes
[ 5] 50.00-51.00 sec 264 MBytes 2.21 Gbits/sec 358 453 KBytes
[ 5] 51.00-52.00 sec 274 MBytes 2.30 Gbits/sec 175 446 KBytes
[ 5] 52.00-53.00 sec 267 MBytes 2.24 Gbits/sec 864 457 KBytes
[ 5] 53.00-54.00 sec 283 MBytes 2.37 Gbits/sec 250 496 KBytes
[ 5] 54.00-55.00 sec 276 MBytes 2.32 Gbits/sec 326 542 KBytes
[ 5] 55.00-56.02 sec 272 MBytes 2.25 Gbits/sec 866 552 KBytes
[ 5] 56.02-57.00 sec 268 MBytes 2.29 Gbits/sec 629 554 KBytes
[ 5] 57.00-58.01 sec 271 MBytes 2.24 Gbits/sec 122 594 KBytes
[ 5] 58.01-59.00 sec 247 MBytes 2.10 Gbits/sec 565 464 KBytes
[ 5] 59.00-60.00 sec 279 MBytes 2.34 Gbits/sec 864 457 KBytes
[ 5] 60.00-61.00 sec 260 MBytes 2.18 Gbits/sec 864 464 KBytes
[ 5] 61.00-62.00 sec 257 MBytes 2.16 Gbits/sec 776 506 KBytes
[ 5] 62.00-63.02 sec 268 MBytes 2.21 Gbits/sec 174 562 KBytes
[ 5] 63.02-64.01 sec 256 MBytes 2.15 Gbits/sec 873 158 KBytes
[ 5] 64.01-65.00 sec 249 MBytes 2.12 Gbits/sec 770 428 KBytes
[ 5] 65.00-66.00 sec 253 MBytes 2.12 Gbits/sec 619 481 KBytes
[ 5] 66.00-67.02 sec 280 MBytes 2.31 Gbits/sec 603 520 KBytes
[ 5] 67.02-68.02 sec 250 MBytes 2.10 Gbits/sec 779 428 KBytes
[ 5] 68.02-69.00 sec 261 MBytes 2.22 Gbits/sec 253 454 KBytes
[ 5] 69.00-70.00 sec 264 MBytes 2.22 Gbits/sec 632 484 KBytes
[ 5] 70.00-71.00 sec 256 MBytes 2.15 Gbits/sec 864 514 KBytes
[ 5] 71.00-72.00 sec 256 MBytes 2.14 Gbits/sec 772 564 KBytes
[ 5] 72.00-73.00 sec 255 MBytes 2.15 Gbits/sec 732 425 KBytes
[ 5] 73.00-74.00 sec 268 MBytes 2.25 Gbits/sec 652 474 KBytes
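Interval output like this can be summarized quickly with awk; a small sketch averaging the Gbits/sec column (field 7 in iperf3's interval lines), run on a few sample lines so it is self-contained:

```shell
# Sketch: average the throughput column of saved iperf3 interval output.
# Sample lines mirror the format above; field 7 is the Gbits/sec value.
cat > /tmp/iperf-intervals.txt <<'EOF'
[  5]  37.00-38.00 sec   274 MBytes  2.30 Gbits/sec  812    462 KBytes
[  5]  38.00-39.00 sec   297 MBytes  2.49 Gbits/sec  479    624 KBytes
[  5]  39.00-40.00 sec   269 MBytes  2.26 Gbits/sec  724    632 KBytes
EOF
awk '{ sum += $7; n++ } END { printf "%.2f Gbits/sec average\n", sum / n }' /tmp/iperf-intervals.txt
# prints: 2.35 Gbits/sec average
```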
This seems even worse right now.
CPU usage is not noticeably higher than when no big transfer is running. Maybe it's time to change some settings and try again?
-
So.... not using Mellanox NICs?