Netgate Discussion Forum

    pfSense on a 2 NIC NUC

    • J
      jwt Netgate
      last edited by

      I recently noticed that someone is building an Ethernet adapter for an i5 NUC.

      GORITE is an "Intel Innovation Partner" that mostly carries alternative lids and cables for several Intel NUCs.

      Among their products is an NGFF (M.2) Ethernet card and associated cable that will fit a "Maple Canyon" NUC.
      http://www.gorite.com/gigabit-ethernet-dongle-for-maple-canyon-5th-gen-core-i3-i5

      At Netgate, we test things so you don't have to. Normally I run an SG-4860 at home, but since I wanted to see if a dual Ethernet NUC was viable, I ordered an i5 "Maple Canyon" NUC (NUC5i5MYHE), 16GB RAM, a 256GB SSD, and the GORITE Ethernet card. While this makes for a "too large" pfSense install, in a later post I'll be moving to pfSense running under bhyve with PCI passthrough.

      This is the NUC http://i.imgur.com/8qqLXqY.jpg you want. There is an i3 model which runs an i3-5010U, but 'Why?' The i5 version of the Maple Canyon NUC uses an i5-5300U. Both the i3 and i5 used here are 5th generation 'Core' 2C/4T CPUs, and support VT-x, VT-d, and AVX 2.0.
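
      If you want to double-check those flags on your own unit from a pfSense shell, they show up in the FreeBSD boot messages. A minimal check (VMX is the VT-x flag, AVX2 is AVX 2.0; acpidump needs root):

      # CPU capability flags from the boot messages: VMX = VT-x, AVX2 = AVX 2.0
      grep -iE 'vmx|avx2' /var/run/dmesg.boot
      # A DMAR table in ACPI is a good sign VT-d is present and enabled in the BIOS
      acpidump -t | grep -i dmar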

      VT-d will be important later, when I move the whole setup to bhyve with PCI passthrough. AVX 2.0 will be important when I eventually bring IPS mode Suricata to bear, with Hyperscan support.
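
      For the curious, PCI passthrough under bhyve is the stock FreeBSD mechanism rather than anything pfSense-specific: you reserve the device for the ppt driver via loader tunables, then hand it to the guest. A rough sketch only; the 2/0/0 bus/slot/function is a placeholder for whatever pciconf reports on your box:

      # /boot/loader.conf on the bhyve host: load vmm and claim the NIC for passthrough
      vmm_load="YES"
      pptdevs="2/0/0"
      # Later, attach that device to the guest on a free virtual PCI slot, e.g.
      #   bhyve ... -s 6,passthru,2/0/0 ... pfsense-vm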

      Here is a shot of the Ethernet card and RAM installed

      The SSD just slides in; it's tool-less. Make sure you get it all the way in.

      When everything is installed, the second Ethernet is nice and sanitary.

      pfSense software version 2.4 goes right on this system. Our UEFI support in 2.4.0 is identical to that found in FreeBSD 11.0. Since I have 16GB RAM and a 256GB SSD, I installed to ZFS. As you can see in this screenshot, the second card is a Realtek 8168/8111. That's a bit of a bummer, as it will limit my performance at home to around 600Mbps, and I have a 1Gbps/1Gbps link.
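
      If you want to check what silicon an M.2 card like this actually uses, the PCI IDs are easy to pull from the pfSense shell. Something like the following, assuming the onboard Intel attaches as em0 and the GORITE card as re0:

      # Vendor/device strings for everything that attached as an Ethernet device
      pciconf -lv | grep -B 4 -i ethernet
      # Confirm which drivers attached and that both links negotiate 1000baseT
      ifconfig em0
      ifconfig re0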

      GZJFl2B.jpg
      T519kaq.jpg
      CZNJ2xl.jpg

      • W
        whosmatt
        last edited by

        @jwt:

        As you can see in this screenshot, the second card is a Realtek 8168/8111. That's a bit of a bummer, as it will limit my performance at home to around 600Mbps, and I have a 1Gbps/1Gbps link.

        What about just using the Intel NIC with VLANs and skipping the Realtek altogether, or using it for a less demanding subnet? I'd be interested to hear how that performs.

        • N
          NOYB
          last edited by

          @whosmatt:

          @jwt:

          As you can see in this screenshot, the second card is a Realtek 8168/8111. That's a bit of a bummer, as it will limit my performance at home to around 600Mbps, and I have a 1Gbps/1Gbps link.

          What about just using the Intel NIC with VLANs and skip the Realtek altogether, or use it for a less demanding subnet?  I'd be interested to hear how that performs.

          A 1Gbps NIC with VLAN WAN/LAN = 500Mbps max throughput, does it not? Thus unable to utilize the full 1Gbps WAN connection. So the extra $40 gets another 100Mbps.

          For a less demanding subnet, why shell out that much dough ($$$)?

          A dual NIC NUC is appealing, but not at that price point for only 600Mbps. If it were full gig on both NICs it would be much more appealing.

          • W
            whosmatt
            last edited by

            @NOYB:

            A 1Gbps NIC with VLAN WAN/LAN = 500Mbps max throughput, does it not?

            It would only with half-duplex. Full duplex should allow for full throughput in both directions.

            • M
              moscato359
              last edited by

              You still lose something.

              If you're bizarrely uploading and downloading >500 Mbit/s simultaneously, then you'd not get those speeds anymore, since the download (arriving on the WAN VLAN) and the upload (arriving on the LAN VLAN) both come into the router over the same wire and are together capped at roughly 1 Gbit/s.

              • W
                whosmatt
                last edited by

                @moscato359:

                You still lose something.

                If you're bizarrely uploading and downloading >500 Mbit/s simultaneously, then you'd not get those speeds anymore.

                Fair enough.  Even better, bond the 2 NICs and then use that LAGG as a parent interface for the VLANs.
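
                In pfSense you'd set that up in the GUI, but for reference the underlying FreeBSD plumbing is just a lagg with VLANs on top. A rough sketch, with em0/re0 and tags 10/20 as made-up examples (the switch ports need matching LACP and trunk config):

                # Bond the two NICs into an LACP lagg
                ifconfig em0 up
                ifconfig re0 up
                ifconfig lagg0 create
                ifconfig lagg0 laggproto lacp laggport em0 laggport re0 up
                # WAN and LAN as 802.1Q VLANs riding the lagg
                ifconfig vlan10 create vlan 10 vlandev lagg0
                ifconfig vlan20 create vlan 20 vlandev lagg0

                One caveat: LACP balances per flow, so a single connection still tops out at one link's worth; the gain is aggregate headroom.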

                • S
                  Soyokaze
                  last edited by

                  @whosmatt:

                  @NOYB:

                  A 1Gbps NIC with VLAN WAN/LAN = 500Mbps max throughput, does it not?

                  It would only with half-duplex. Full duplex should allow for full throughput in both directions.

                  There is NO* such thing as half-duplex on 1Gbit.

                  * While the standard technically "allows" it, you will never see it in a live environment.

                  Need full pfSense in a cloud? PM for details!

                  • O
                    Ofloo
                    last edited by

                    wrong topic

                    • T
                      tjoff
                      last edited by

                      Oh boy, this topic seems all too familiar. I don't want to hijack the thread, just add some notes about a similar setup that I've used for a couple of years. I'm relaying this from memory, so some details (version numbers, perhaps?) might be off.

                      I have the first i5 NUC that was released, and I used the mPCIe slot where you'd "normally" have the wifi card. Instead I inserted an mPCIe -> PCIe 1x adapter, and into that slot I inserted an old, basic Intel gigabit NIC.

                      Image of the setup (NSFW):
                      http://i.imgur.com/ufTCBBq.jpg

                      Now, the old NUCs didn't have the SATA drive option, so there was no power header available on the motherboard; I'm powering the mPCIe adapter from another PC… (it might have been possible to pull power from somewhere else inside the NUC, but I didn't research that).

                      The biggest reason for doing all this was that I wanted to use XenServer on the NUC and virtualize pfSense along with some other VMs. The main issue was that FreeBSD <10 didn't have proper support for the XenServer virtualized Ethernet adapters and the performance was abysmal: about 1% of CPU utilization per Mbit, so I could barely max out my 100 Mbit line, and if I did there was nothing left for the other VMs.

                      So I passed one of the NICs through to pfSense (I believe the integrated one) and used the other to manage XenServer. All of this hassle could have been avoided by using a simple USB Ethernet adapter for XenServer management, which is not bandwidth-heavy and rarely used - but there was a nasty bug where the USB Ethernet adapter would get renamed in XenServer and I'd lose access to the machine. XenServer also doesn't allow the management NIC to be a virtual adapter, which could have been an alternative (but scary) solution.

                      With the advent of pfSense 2.2, which used FreeBSD 10, proper virtualized Ethernet drivers for XenServer were available, and some time after that I scrapped the setup above in favor of assigning pfSense VLAN adapters in XenServer instead - the main advantages being avoiding the crazy setup depicted above and not having to rely on another PC to power one of the NICs...
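
                      For anyone wanting to replicate that part, the XenServer side is only a couple of xe commands. A rough sketch from the standard CLI; the UUIDs and the VLAN tag are placeholders you'd look up on your own host:

                      # Find the PIF (physical interface) carrying the trunk
                      xe pif-list device=eth0
                      # Create a network and bind a VLAN tag to it on that PIF
                      xe network-create name-label=pfsense-wan
                      xe vlan-create network-uuid=<network-uuid> pif-uuid=<pif-uuid> vlan=10
                      # Then give the pfSense VM a VIF on that network (via XenCenter or xe vif-create)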

                      I just saw this thread and became somewhat nostalgic, and wanted to let you know that the platform indeed is "suitable" for such hacks. Now, I did the passthrough in XenServer/Linux rather than in pfSense/FreeBSD, but the hardware is/was at least capable of it! :)

                      • W
                        worthmining
                        last edited by

                        @tjoff:


                        Can you elaborate a bit on your single NIC setup? I have an old NUC and, impressed by your hack, was thinking about doing the same thing. Let me see if I understand you correctly: did you install XenServer on the NUC bare metal and add a VM for pfSense? With pfSense VLANs, will it be able to use the same physical NIC on XenServer? I suppose I'll need to set up the VLANs on a switch, which connects to both the broadband modem and the wireless router?

                        • S
                          Soyokaze
                          last edited by

                          If you use an L2 VLAN-capable switch, you can use a single NIC for as many vNICs/VLANs as you wish, even with bare-metal pfSense.
                          It is just a quirk of XenServer that it requires a separate NIC for management.
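
                          At the FreeBSD level that's just 802.1Q tagging on one parent NIC. A rough sketch (em0 and the tags are example values only; in pfSense you'd define and assign these VLANs in the GUI, and the switch port facing the box has to be a tagged trunk, with the modem and the wireless router on matching access ports):

                          # Two VLANs on a single physical NIC
                          ifconfig em0 up
                          ifconfig vlan100 create vlan 100 vlandev em0   # e.g. WAN, carrying the modem's VLAN
                          ifconfig vlan200 create vlan 200 vlandev em0   # e.g. LAN, toward the wireless router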

                          Need full pfSense in a cloud? PM for details!

                          • ?
                            Guest
                            last edited by

                            Basically, a NUC without Thunderbolt or a real second Intel NIC is shit. You can't do what you want to do, no matter what it is.

                            If you need less than 500Mbit: get a cheaper dual-NIC box. If you need more than 1Gbit: get a more expensive dual-NIC box. The NUC sits squarely in the middle: it doesn't do the faster things, and it's expensive for the slower things.

                            There used to be Thunderbolt versions that you could use with the cheap Apple 1Gbit Thunderbolt adapter, and that worked great (as long as you don't hot-plug them). You could have a NUC with three 1Gbit ports pushing line speed just fine…

                            Also, Intel won't allow their chips in an mPCIe/M.2 form factor, so you will probably always get shitty Realtek and comparable chips for those hacky solutions. I haven't found a real solution, and I imagine nobody is going to in the future.

                            • ?
                              Guest
                              last edited by

                              That's a bit of a bummer, as it will limit my performance at home to around 600Mbps, and I have a 1Gbps/1Gbps link.

                              Would you please be so kind as to tell me what WAN speed you normally get with your SG-4860 pfSense unit? It's not really on topic here, but it would be interesting for me to know. Thanks for taking the time to answer.
