pfSense on a 2-NIC NUC
-
@jwt:
As you can see in this screenshot, the second card is a Realtek 8168/8111. That's a bit of a bummer, as it will limit my performance at home to around 600Mbps, and I have a 1Gbps/1Gbps link.
What about just using the Intel NIC with VLANs and skip the Realtek altogether, or use it for a less demanding subnet? I'd be interested to hear how that performs.
A 1Gbps NIC with VLAN WAN/LAN = 500Mbps max throughput, does it not? That leaves you unable to utilize the full 1Gbps WAN connection, so the extra $40 gets you another 100Mbps.
For a less demanding subnet, why shell out that much dough ($$$)?
A dual-NIC NUC is appealing, but not at that price point for only 600Mbps. If it were full gigabit on both NICs it would be much more appealing.
-
A 1Gbps NIC with VLAN WAN/LAN = 500Mbps max throughput, does it not?
It would, but only at half-duplex. Full duplex should allow full throughput in both directions.
-
You still lose something.
If you're bizarrely uploading and downloading at more than 500Mbit simultaneously, then you won't get those speeds anymore.
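To put rough numbers on it (assuming an ideal 1Gbps full-duplex trunk and ignoring VLAN and protocol overhead): a download enters on the WAN VLAN (RX) and leaves on the LAN VLAN (TX), so a 1Gbps download by itself still fits, because RX and TX are independent. But download and upload each use both directions of the single link, so combined up + down tops out around 1Gbps - e.g. 700Mbps down plus 700Mbps up would need about 1.4Gbps of transmit, which one gigabit port can't deliver.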
-
You still lose something.
If you're bizarrely uploading and downloading at more than 500Mbit simultaneously, then you won't get those speeds anymore.
Fair enough. Even better, bond the 2 NICs and then use that LAGG as a parent interface for the VLANs.
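In pfSense this is normally set up from the GUI (Interfaces > Assignments, the LAGGs and VLANs tabs), but as a rough sketch of the underlying FreeBSD idea - the interface names em0/em1 and VLAN IDs 10/20 are just placeholder assumptions:

ifconfig em0 up
ifconfig em1 up
# bond the two physical ports into one lagg (LACP assumed on the switch side)
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport em0 laggport em1 up
# hang the WAN and LAN VLANs off the lagg as their parent interface
ifconfig vlan10 create vlan 10 vlandev lagg0
ifconfig vlan20 create vlan 20 vlandev lagg0

pfSense keeps its own config, so treat this as an illustration of the concept rather than something to paste into a live box.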
-
A 1Gbps NIC with VLAN WAN/LAN = 500Mbps max throughput, does it not?
It would, but only at half-duplex. Full duplex should allow full throughput in both directions.
There is NO* such thing as half-duplex at 1Gbit.
* while the standard "allows" it, you will never ever see it in a live environment
-
Oh boy, this topic seems all too familiar. I don't want to hijack the thread, but just want to add some notes about a similar setup that I've used for a couple of years. I'm relaying this from memory, so some details (version numbers perhaps?) might be off.
I have the first i5 NUC that was released, and I used the mPCIe slot where you'd "normally" have the wifi card. Instead I inserted an mPCIe -> PCIe 1x adapter, and in that slot I put an old and basic Intel gigabit NIC.
Image of the setup (NSFW):
http://i.imgur.com/ufTCBBq.jpg
Now the old NUCs didn't have the SATA drive option, so there was no power header available on the motherboard, and I'm powering the mPCIe adapter from another PC… (it might have been possible to extract power from some other place within the NUC, but I didn't research that)
The biggest reason for doing all this was that the goal was to use XenServer on the NUC and virtualize pfSense along with some other VMs. The main issue was that FreeBSD <10 didn't have proper support for the XenServer virtualized ethernet adapters, and the performance was abysmal (about 1% of CPU utilization per Mbit, so I could barely max out my 100Mbit line, and if I did there was nothing left for the other VMs).
So I passed one of the NICs through to pfSense (I believe the integrated one) and used the other to manage XenServer. All of this hassle could have been avoided by using a simple USB ethernet adapter for XenServer management (which is not bandwidth-heavy and is rarely used), but there was a nasty bug where the USB ethernet adapter would be renamed in XenServer and I'd lose access to the machine; also, XenServer doesn't allow the management NIC to be a virtual adapter, which could have been an alternative (but scary) solution.
With the advent of pfSense 2.2, which is based on FreeBSD 10, proper virtualized ethernet drivers for XenServer became available, and some time after that I scrapped the setup above in favor of assigning pfSense VLAN adapters in XenServer instead (roughly as sketched below) - with the main advantages of avoiding the crazy setup depicted above and not having to rely on another PC for powering one of the NICs...
I just saw this thread, became somewhat nostalgic, and wanted to let you know that the platform is indeed "suitable" for such hacks. Now, I did the passthrough in XenServer/Linux rather than in pfSense/FreeBSD, but the hardware is/was at least capable of it! :)
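For anyone wanting to replicate the VLAN approach, the XenServer side looks roughly like this (all names, UUIDs and VLAN IDs below are made-up placeholders, and the same thing can be done from XenCenter):

# create a network for the WAN VLAN and tag it onto the physical PIF
xe network-create name-label=wan-vlan100
xe vlan-create network-uuid=<network-uuid> pif-uuid=<pif-uuid-of-eth0> vlan=100
# give the pfSense VM a virtual NIC on that network
xe vif-create vm-uuid=<pfsense-vm-uuid> network-uuid=<network-uuid> device=1

Repeat for the LAN VLAN; inside pfSense the two VIFs should just show up as ordinary xn interfaces.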
-
Oh boy, this topic seems all too familiar. [...] Now, I did the passthrough in XenServer/Linux rather than in pfSense/FreeBSD, but the hardware is/was at least capable of it! :)
Can you elaborate a bit on your single-NIC setup? I have an old NUC and, impressed by your hack, was thinking about doing the same thing. Let me make sure I understand you correctly: did you install XenServer on the NUC bare metal and add a VM for pfSense? With pfSense VLANs, it'll be able to use the same physical NIC on XenServer? I suppose I'll need to set up the VLANs on a switch, which connects to both the broadband modem and the wireless router?
-
If you use an L2 VLAN-capable switch, you can use a single NIC for as many vNICs/VLANs as you wish, even with bare-metal pfSense.
It is just a quirk of XenServer that it requires a separate NIC for management.
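As a rough illustration of the bare-metal case (in the pfSense GUI this lives under Interfaces > Assignments > VLANs; igb0 and the VLAN IDs here are placeholder assumptions), the single physical port just becomes a tagged trunk to the switch:

# one physical trunk port carrying both networks, tagged on the switch
ifconfig igb0 up
ifconfig vlan100 create vlan 100 vlandev igb0   # WAN - switch access port to the modem in VLAN 100
ifconfig vlan200 create vlan 200 vlandev igb0   # LAN - switch access ports to the wireless router/clients in VLAN 200
-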
Basically, a NUC without Thunderbolt or a real second Intel NIC is shit. You can't do what you want to do, no matter what it is.
If you need less than 500Mbit: get a cheaper dual-NIC box. If you need more than 1Gbit: get a more expensive dual-NIC box. The NUC sits squarely in the middle: it doesn't do the faster things, and it's an expensive way to do the slower things.
There used to be Thunderbolt versions that you could use with the cheap Apple 1Gbit Thunderbolt adapter; that worked great (as long as you didn't hot-plug them). You could have a three-NIC 1Gbit NUC pushing line speed just fine…
Also, Intel won't allow their chips in an mPCIe/M.2 form factor, so you will probably always get shitty Realtek and comparable chips for those hacky solutions. I haven't found a real solution, and I imagine nobody is going to in the future.
-
That's a bit of a bummer, as it will limit my performance at home to around 600Mbps, and I have a 1Gbps/1Gbps link.
Would you please be so friendly as to tell me what normal or ordinary WAN speed you usually get with your SG-4860 pfSense unit? It's not really on topic here, but it would be of personal interest to me to know. Thanks for taking the time to answer.