pfSense 2.7 on Intel Xeon D-17xx SoC: SFP28 working?
-
Both using the D-1718T?
-
@tman222 Thanks for taking me through this...
So it sounds like you're actually running pfSense Plus (not CE 2.7), correct? My 2.7 CE install doesn't have the necessary drivers included, so I was trying to compile the Intel drivers under the FreeBSD 14 beta, like I had to do previously for pfSense CE 2.6.
Realizing this made me take a look at pfSense Plus. I always thought it was only available for Netgate-branded devices, though I never really had a need for any more advanced features than CE provided. I'm grateful they have a free home lab license!
For anyone else who stumbles on this thread, along with pfSense Plus and ice_ddp_load="YES" in the loader.conf.local file:
In the GUI:
ENABLE Hardware TCP Segmentation Offloading (uncheck disable)
ENABLE Large Receive Offloading (uncheck disable)
In past versions I also had these overrides in my loader.conf.local file to achieve max throughput, but so far it looks like they aren't necessary:
dev.ice.0.iflib.override_qs_enable="1"
dev.ice.0.iflib.override_ntxds="4096"
dev.ice.0.iflib.override_nrxds="4096"
dev.ice.0.iflib.override_ntxqs="8"
dev.ice.0.iflib.override_nrxqs="8"
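To confirm the offloads actually took effect, you can check the interface options from the shell (ice0 here is an assumption for the first port; TSO4/TSO6 and LRO should show up in the options line once enabled):
# The enabled capabilities appear in the options=... line
ifconfig ice0 | grep options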
-
I'm actually running the D-1736NT, which has pretty much the same E823-L NIC, just double the cores I think.
-
So what throughput are you seeing there with 23.05.1?
-
I'm a little surprised that it only works with pfSense Plus, since pfSense Plus 23.05.1 and pfSense CE 2.7.0 are both based on FreeBSD 14.
https://docs.netgate.com/pfsense/en/latest/releases/versions.html
If you start with a fresh install of CE 2.7.0, is everything recognized then? When I upgraded to the Xeon D-1718T based system, I installed pfSense CE 2.7.0 first and then moved to pfSense Plus 23.05.1. However, I believe the E823-L interfaces were also recognized fine in 2.7.0.
In any case, were you able to get things working properly in 23.05.1?
-
In 23.05.1 I am currently getting 23.5 Gbps using iperf3 as both client and server.
If one doesn't enable TSO and LRO as I mentioned above, there ends up being suboptimal, asymmetric throughput.
In 2.7 CE fresh out of the box, there is a default driver that establishes a link:
"dmesg | grep ice0" says there's a 25GbE connection and everything looks okay, but iperf3 shows only 4.6 Gbps.
"dmesg | grep ddp" says ice_ddp.ko cannot be found/loaded and the driver is running in safe mode (even with ice_ddp_load="YES" in loader.conf.local).
If you compare the /boot/kernel/ directory between CE and Plus, Plus has far more extensive module support.
-
Yes, there was an effort to 'slim down' the CE build and some modules were removed. That one should probably be there though, given that the driver itself is present. You might open a bug/feature request for that: redmine.pfsense.org
23.5 Gbps is impressive. Are you running one end of that on pfSense itself?
Steve
-
If you mean "on pfSense itself":
I am running iperf3 from the CLI on the pfSense VM instance, using the NIC passed directly through to it and the drivers provided by pfSense 23.05.1.
It is connecting to a separate machine on the same subnet, so pfSense is not doing any routing.
With 8 Xeon D-1736NT threads/CPUs allocated to the VM, the CPU load is at about 85% just to handle the 23.5 Gbps of switching.
I'm about to swap this box in as my main gateway, and then I'll be able to see what is possible in terms of layer 3 routing. I'm sure it'll be less than line speed and/or require up to the full 18 threads available on the CPU.
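For anyone wanting to reproduce the test, it was essentially the following (a sketch; the server address and stream count are assumptions, not my exact values):
# On the separate machine on the same subnet:
iperf3 -s
# On the pfSense VM's CLI, with parallel streams to saturate the link:
iperf3 -c 192.168.1.10 -P 8 -t 30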
-
Hmm, still impressive. Surprising that pfSense will serve/sink that much traffic directly. It's usually faster at routing.
-
I just wanted to give an update regarding achieving full throughput on 2.7 CE, especially given the recent pfSense Plus licensing debacle:
I was able to attain the full 23.5 Gbps throughput on 2.7 CE, straight from a fresh install plus the aforementioned hardware offload settings, by enabling SR-IOV on the Proxmox host and passing the virtual functions (virtual NICs) through to pfSense. In this situation pfSense uses the iavf driver, which is included in CE and removes the need for if_ice.ko and ice_ddp.ko.
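For reference, the SR-IOV side on the Proxmox host looked roughly like this (a sketch; the interface name, VF count, PCI address, and VM ID are all placeholders and will differ per setup):
# Create virtual functions on the E823-L port (enp1s0f0 is a placeholder name)
echo 4 > /sys/class/net/enp1s0f0/device/sriov_numvfs
# Find the PCI addresses of the new VFs
lspci | grep -i "virtual function"
# Pass one VF through to the pfSense VM (VM ID 100 is a placeholder)
qm set 100 -hostpci0 0000:01:00.1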
On a related note: I was able to hit 31 Gbps on pfSense through an E810-CAM2 (which uses the same driver setup as the E823). Though I've only just started playing with this 100GbE NIC, so 31 Gbps is just the starting point.