Supported 10Gb NICs?



  • Hello all.
    I have slowly been upgrading to a 10Gb network, first by purchasing a Quanta LB6M 10Gb switch and installing Mellanox ConnectX-2 single-port cards in my PCs and Unraid server. All good there, but then I decided to purchase a dual-port Mellanox ConnectX-2 NIC for my pfSense box. Of course pfSense didn't recognize the card, and upon further investigation it seems I need a whole workaround to get the drivers onto pfSense. I found this explanation here….

    http://unix.stackexchange.com/questions/272329/pfsense-with-mellanox-connectx-2-10gbit-nics

    I'm not familiar with FreeBSD; I tried installing it in a virtual machine and compiling the drivers, but I just don't have the experience to get it right.

    I was hoping to find out which 10Gb NICs are natively supported by pfSense. I've read that Chelsio NICs are good; I have my eye on a Chelsio CC2-N320E-SR.
    It also looks like many Intel NICs are supported, such as the Intel X520-DA1/2.
    Any input would be great.
    Thanks!



  • The ones at store.pfsense.org are probably supported.



  • The only one I see in the store is the Chelsio T520-SO-CR, but I'm looking to buy something used on eBay for under $150.



  • Isn't pfSense hardware compatibility just dependent on FreeBSD?

    https://www.freebsd.org/relnotes/CURRENT/hardware/support.html#ethernet



  • On 2.3.x, to use a Mellanox ConnectX-2 I had to compile the driver on another machine and then import it to pfSense.

    I haven't tried on a 2.4 box.
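
    For anyone attempting the same workaround: the procedure in the linked Stack Exchange post boils down to building the Mellanox kernel modules from the FreeBSD source tree that matches your pfSense base, copying the resulting .ko files over, and loading them at boot. A rough sketch only; the module names (mlx4/mlxen) and paths below are from the FreeBSD 10.x tree that 2.3.x is based on and may differ on other releases, so verify against your own `uname -r`:

    ```shell
    # On a FreeBSD build machine running the SAME release as the pfSense base:
    cd /usr/src/sys/modules/mlx4  && make   # core ConnectX support
    cd /usr/src/sys/modules/mlxen && make   # Ethernet personality

    # Copy the resulting mlx4.ko and mlxen.ko into /boot/modules on the
    # pfSense box, then have them loaded at boot (loader.conf.local is the
    # conventional place for local additions):
    cat >> /boot/loader.conf.local <<'EOF'
    mlx4_load="YES"
    mlxen_load="YES"
    EOF
    ```

    The modules must come from the exact same FreeBSD revision as the running kernel, which is why an upgrade (e.g. to 2.4) means rebuilding them.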



  • Hello,

    this thread isn't too old, so I'd like to join the discussion. I'm thinking about setting up two pfSense boxes, virtual or bare metal, on new HPE servers with 4x 10Gb NICs. Such a box would be bigger than the XG-1541 1U, but I assume it should work. But what about the I/O requirements to achieve 30-40 Gbps throughput in the so-called backend?

    Specs:

    • I would size the servers with 8-12 cores, plenty of RAM, and SSDs (maybe NVMe)

    • A second CPU depends on virtualization

    • Ethernet would be some Broadcom / Intel Pro

    • 2 servers = one cluster (pfSense sync)

    • Connections between pfSense and the switches over 10Gb copper OR 10Gb fiber

    Q's:

    • Would it be better to set them up natively on the server hardware, or to use a hypervisor like ESX and run them as VMs?

    • Has anybody run such a setup already?

    • Are fiber NICs supported?

    I really hope somebody can answer my questions; we have been thinking about this for some months now and have no clue whether it is even possible.

    Kind regards,

    pfs-pdf



  • @pfs-pdf:

    But what about the I/O requirements to achieve 30-40 Gbps throughput in the so-called backend?

    Wait a couple of years … that's not possible in software at this point in time.



  • @heper:

    @pfs-pdf:

    But what about the I/O requirements to achieve 30-40 Gbps throughput in the so-called backend?

    Wait a couple of years … that's not possible in software at this point in time.

    It seems you know more than me; could you please tell me why it wouldn't be possible?
    Don't get me wrong, I know you need a really well-configured system to achieve this, even with enterprise firewalls. But if pfSense already supports 10Gb, what is the throughput, or better, what is the bottleneck?


  • Netgate

    better what is the bottleneck?

    PPS (packets per second).
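
    That answer can be put into numbers. Firewalling and routing cost scales with packet rate, not bit rate, and at small frame sizes the packet rate is brutal. A quick back-of-the-envelope sketch (standard Ethernet framing math, nothing pfSense-specific; each frame costs an extra 20 bytes on the wire for the 8-byte preamble and 12-byte inter-frame gap):

    ```shell
    # Worst-case line-rate packet rate for a given link speed and frame size.
    pps() {  # usage: pps <link-bits-per-second> <frame-bytes>
      awk -v bps="$1" -v frame="$2" \
        'BEGIN { printf "%.0f\n", bps / ((frame + 20) * 8) }'
    }

    pps 10000000000 64     # 10GbE, 64B frames   -> ~14.88 Mpps
    pps 40000000000 64     # 40G,   64B frames   -> ~59.5 Mpps
    pps 10000000000 1500   # 10GbE, 1500B frames -> ~0.82 Mpps
    ```

    So "30-40 Gbps" with small packets means tens of millions of packets per second through the firewall path, which is far beyond what a general-purpose software stack could forward at the time.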