Netgate Discussion Forum

    pfSense 2.7 on Intel Xeon D-17xx SoC: SFP28 working?

tman222:

I had the opportunity to install pfSense 2.7 CE on an Intel Xeon D-1718T based system recently and can confirm that the SFP28 ports are supported. They are recognized as Intel Ethernet Connection E823-L for SFP and use the iflib-based ice driver. In my case I see two interfaces in pfSense, ice0 and ice1. I have not been able to test them out yet, but the two Intel SFP+ transceivers I previously used in SFP+ cages appear to be recognized just fine. Hope this helps.
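For anyone wanting to verify the same hardware, a quick check from the pfSense shell (standard FreeBSD tools; exact output will vary by board):

    pciconf -lv | grep -B3 -i e823    # the E823-L ports should show up attached to ice0/ice1
    dmesg | grep -i ice               # driver attach and link messages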

tman222:

I wanted to add some additional notes after getting a chance to experiment with this platform a bit more:

1. As of pfSense Plus 23.05.1, one still needs to add ice_ddp_load="YES" to the loader.conf.local file, otherwise the E800-series adapters will be limited in performance. See here for more details: https://reviews.freebsd.org/rS361541. Once the ice_ddp.ko module was loaded, the warnings about safe mode operation and the single queue limitation went away (see the example after this list).
2. The Intel SFP+ transceivers I installed in the SFP28 cages were able to establish a link at 10Gbit/s (fiber) just fine. I also experimented with a handful of SFP+ to RJ45 adapters, and these were able to establish a link at 10Gbit/s (copper) just fine when installed in the SFP28 cages. One exception was an SFP+ to RJ45 adapter that also supports NBASE-T speeds: it did not appear to work, i.e. no link was established at either 10Gbit/s or 1Gbit/s (I wasn't able to test 2.5Gbit/s or 5Gbit/s).
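To illustrate point 1, a minimal /boot/loader.conf.local along these lines worked for me; the comments are just annotations:

    # /boot/loader.conf.local
    ice_ddp_load="YES"    # load the DDP (Dynamic Device Personalization) package at boot

After a reboot, dmesg should no longer warn about safe mode or the single queue limitation.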

        Hope this helps.

phloggu @tman222:

          @tman222
          Thank you for your report!

eracerxrs @phloggu:

            @phloggu

Just wanted to add that I've been trying to get the full 25Gbps since 2.7 was released, but it is not yet possible.

No matter whether I let it connect at 10Gbps using the drivers that come with 2.7, or compile the Intel driver on the FreeBSD 14 beta and use that on 2.7 to get it to "say" it is connecting at 25Gbps, the max throughput is always 6Gbps.

According to the dmesg output, the problem seems to be the ice_ddp module failing to load, so the driver goes into safe mode.

I can get the full 25Gbps in pfSense 2.6 when compiling the drivers under FreeBSD 12.3.

I am hoping that once FreeBSD 14 is finally released at the end of October, I'll be able to compile Intel drivers that work.

tman222 @eracerxrs:

@eracerxrs said in pfSense 2.7 on Intel Xeon D-17xx SoC: SFP28 working?:

According to the dmesg output, the problem seems to be the ice_ddp module failing to load, so the driver goes into safe mode.

Hi @eracerxrs - did you try putting ice_ddp_load="YES" into the loader.conf.local file? When I did this, the ice_ddp.ko module was loaded and the warnings about safe mode went away. Please also see: https://reviews.freebsd.org/rS361541

              Hope this helps.

eracerxrs @tman222:

                Hey @tman222,

                Yeah, I have:

                ice_ddp_load= "YES"
                if_ice_load="YES"
                

                in my loader.conf file. And it's still going into safe mode. Where am I going wrong...

                Where are you getting the ice_ddp.ko and if_ice.ko files from?
                Are you placing them in the /boot/modules folder?
                You're getting the full 25Gbps throughput?

tman222 @eracerxrs:

@eracerxrs said in pfSense 2.7 on Intel Xeon D-17xx SoC: SFP28 working?:

...in my loader.conf file. And it's still going into safe mode. Where am I going wrong...

                  Hi @eracerxrs - make sure you use the loader.conf.local file instead of loader.conf.

                  https://docs.netgate.com/pfsense/en/latest/config/advanced-tunables.html#managing-loader-tunables

All you should need to add is ice_ddp_load="YES". In my case, I had to put that line at the very top of the loader.conf.local file for the DDP module to be loaded at boot. The module is included with pfSense; I didn't have to install any additional drivers.
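After the reboot you can confirm the module actually loaded with standard FreeBSD commands, e.g.:

    kldstat | grep ice_ddp    # the module should be listed once loaded
    dmesg | grep -i ddp       # should report the DDP package loading rather than safe mode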

                  I have not been able to test at 25Gbps, but I was getting full throughput at 10Gbps.

                  Hope this helps.

stephenw10 (Netgate Administrator):

                    Both using the D-1718T?

eracerxrs @tman222:

                      @tman222 Thanks for taking me through this...

So it sounds like you're actually running pfSense Plus (not CE 2.7), correct? My 2.7 CE install doesn't have the necessary drivers included, so I was trying to compile the Intel drivers against the FreeBSD 14 beta, as I had to do previously for pfSense CE 2.6.

Realizing this made me take a look at pfSense Plus. I always thought it was only available for Netgate-branded devices, though I never really had a need for more advanced features than CE provides. I'm grateful they have a free home lab license!

For anyone else who stumbles on this thread, along with pfSense Plus and ice_ddp_load="YES" in the loader.conf.local file:

In the GUI (System > Advanced > Networking):
ENABLE Hardware TCP Segmentation Offloading (uncheck "disable").
ENABLE Large Receive Offloading (uncheck "disable").
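To sanity-check the offloads from the shell (assuming the interface is ice0), the options flags reported by ifconfig should now include TSO4/TSO6 and LRO:

    ifconfig ice0 | grep options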

[In past versions I also had these overrides in my loader.conf.local file to achieve max throughput, but so far it looks like they aren't necessary:

dev.ice.0.iflib.override_qs_enable="1"
dev.ice.0.iflib.override_ntxds="4096"
dev.ice.0.iflib.override_nrxds="4096"
dev.ice.0.iflib.override_ntxqs="8"
dev.ice.0.iflib.override_nrxqs="8"
]

eracerxrs @stephenw10:

                        @stephenw10

I'm actually running the D-1736NT; it's pretty much the same E823-L NIC, just double the cores, I think.

stephenw10 (Netgate Administrator):

                          So what throughput are you seeing there with 23.05.1?

tman222:

I'm a little surprised that it only works with pfSense Plus, since pfSense Plus 23.05.1 and pfSense CE 2.7.0 are both based on FreeBSD 14.

                            https://docs.netgate.com/pfsense/en/latest/releases/versions.html

If you start with a fresh install of CE 2.7.0, is everything recognized then? When I upgraded to the Xeon D-1718T based system, I installed pfSense CE 2.7.0 first and then moved to pfSense Plus 23.05.1. However, I believe the E823-L interfaces were also recognized fine in 2.7.0.

                            In any case, were you able to get things working properly in 23.05.1?

eracerxrs:

In 23.05.1 I am currently getting 23.5 Gbps using iperf3, running it as both client and server. If one doesn't enable TSO and LRO as I mentioned above, there ends up being suboptimal, asymmetric throughput.
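For reference, the invocations look roughly like this (the peer address is just a placeholder):

    iperf3 -s                          # on the peer machine
    iperf3 -c 192.0.2.10 -P 8 -t 30    # on pfSense; parallel streams help saturate a 25GbE link

and then the reverse, with pfSense running the server side.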

In 2.7 CE, fresh out of the box, there is a default driver that establishes a link.
"dmesg | grep ice0" says there's a 25GbE connection and looks okay, but iperf3 testing shows it's actually 4.6 Gbps.
"dmesg | grep ddp" says ice_ddp.ko cannot be found/loaded and the driver is running in safe mode (even with ice_ddp_load="YES" in loader.conf.local).

                              If you compare the /boot/kernel/ directory between CE and Plus, Plus has far more extensive module support.
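An easy way to see the difference is to run something like this on each install:

    ls /boot/kernel/ | grep -i ice    # Plus lists ice_ddp.ko here; my CE 2.7.0 install does not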

stephenw10 (Netgate Administrator):

Yes, there was an effort to 'slim down' the CE build and some modules were removed. That one should probably be there, though, given that the driver itself is present. You might open a bug/feature request for that: redmine.pfsense.org

                                23.5Gbps is impressive. Are you running one end of that on pfSense itself?

                                Steve

eracerxrs @stephenw10:

                                  @stephenw10

If you mean "on pfSense itself":
I am running iperf3 from the CLI on the pfSense VM instance, using the NIC passed directly through to it and the drivers provided by pfSense 23.05.1.
It connects to a separate machine on the same subnet, so pfSense is not doing any routing.
With 8 Xeon D-1736NT threads/vCPUs allocated to the VM, the CPU load is at about 85% just to handle the 23.5 Gbps of switching.

I'm about to swap this box in as my main gateway, and then I'll be able to see what is possible in terms of layer 3 routing. I'm sure it'll be less than line speed and/or require up to the full 18 threads available on the CPU.

stephenw10 (Netgate Administrator):

Hmm, still impressive. Surprising that pfSense will source/sink that much traffic directly. It's usually faster at routing.

eracerxrs:

I just wanted to give an update on achieving full throughput on 2.7 CE, especially given the recent pfSense Plus licensing debacle:

I was able to attain the full 23.5 Gbps throughput on 2.7 CE straight from a fresh install (plus the aforementioned enabling of the hardware offloads) by enabling SR-IOV on the Proxmox host and passing the virtual functions (virtual NICs) into pfSense. In this setup pfSense uses the iavf driver, which is included in CE, so if_ice.ko and ice_ddp.ko are no longer needed.
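For anyone trying the same, creating the virtual functions on the Proxmox (Linux) host goes roughly like this; the interface name and VF count are just examples:

    # On the Proxmox host: create 4 VFs on the E823-L port (interface name will differ)
    echo 4 > /sys/class/net/enp1s0f0/device/sriov_numvfs

The resulting VFs can then be passed through to the pfSense VM as PCI devices.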

On a related note: I was able to hit 31 Gbps on pfSense through an E810-CAM2 (which uses the same driver setup as the E823-L). Though I've only just started playing with this 100GbE NIC, so 31 Gbps is just the starting point.
