pfSense as internal router with ACLs
nick76
I'm studying a solution based on 2 HP DL380 G7 servers (CPU X5650, 16 GB RAM) that will replace our current default gateway, act as the default gateway between VLANs (10-15 VLANs), and allow only explicitly enabled traffic from one VLAN to another. There will be around 250 devices (PCs, servers, printers, …). There will not be any other services besides CARP and Active Directory integration. I will do this mainly to filter traffic from clients to servers.
Could this be a working solution? Or will it become a bottleneck and create frustration for the end users?
thank you very much
Guest
It will work fine.
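For what it's worth, pfSense manages rules per interface in the web GUI, but the intent you describe (default deny between VLANs, with explicit allows from clients to servers) boils down underneath to pf rules roughly like the sketch below. The interface names, networks, and ports here are made-up examples, not your actual setup:

```
# Hypothetical pf-style sketch of "deny inter-VLAN traffic by default,
# allow only what is explicitly enabled". In pfSense you would express
# this with per-interface GUI rules rather than editing pf.conf by hand.
block in on vlan10                                    # client VLAN: default deny
pass  in on vlan10 proto tcp from vlan10:network \
      to 192.168.20.10 port { 445 }                   # allow SMB to one file server
pass  in on vlan10 proto { tcp, udp } from vlan10:network \
      to 192.168.20.5 port { 53 }                     # allow DNS to the AD server
```

Since pf evaluates to a default-deny on that interface unless a pass rule matches, anything you don't explicitly enable between VLANs is dropped.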
jahonix
HP servers DL380G7 (CPU X5650, 16G RAM) … gateway between VLANs (10-15VLANs)
Basically this will be a great use for those devices.
Is the link speed of your switches currently 1Gbps, or 10/40Gbps already? The 4 onboard NICs are "only" 1Gbps, so for 10Gbps you would want to install a beefier network card. But current hardware/software will max out well below 10Gbps anyway.
Third, the core of pfSense (pf, packet forwarding, shaping, link bonding/sharing, IPsec, etc) will be re-written using Intel’s DPDK.
DPDK is a set of libraries and drivers for fast packet processing. It was designed to run on any processor; Intel x86 was the first CPU to be supported, and ports for other CPUs, like IBM Power 8, are in progress.
We have a goal of being able to forward, with packet filtering, at rates of at least 14.88Mpps. This is "line rate" on a 10Gbps interface. There is simply no way to use today's FreeBSD (or Linux) in-kernel stacks for this type of load. This work is only available on certain, select Ethernet cards (mostly 1Gbps/10Gbps/40Gbps Intel interfaces, as well as various VMware and Xen 'virtualization' NICs). Other vendors, including Broadcom, Myricom, Chelsio and Cisco, have shown interest. This also means that the underlying kernel and system will be 64-bit only.
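As a sanity check on that 14.88Mpps figure: it comes from minimum-size 64-byte Ethernet frames, each of which also occupies 20 bytes of overhead on the wire. A quick calculation:

```python
# Line-rate calculation for 10 Gbps Ethernet with minimum-size (64-byte) frames.
# On the wire each frame also carries a 7-byte preamble, a 1-byte start-of-frame
# delimiter, and a 12-byte inter-frame gap: 64 + 20 = 84 bytes = 672 bits/frame.

LINK_BPS = 10_000_000_000        # 10 Gbps
FRAME_BYTES = 64                 # minimum Ethernet frame size
OVERHEAD_BYTES = 7 + 1 + 12      # preamble + SFD + inter-frame gap

bits_per_frame = (FRAME_BYTES + OVERHEAD_BYTES) * 8
pps = LINK_BPS / bits_per_frame
print(f"{pps / 1e6:.2f} Mpps")   # → 14.88 Mpps
```

So "line rate" is the worst case: every frame as small as the standard allows, back to back.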
Spread the VLANs across your NICs wisely to not create bottlenecks.
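That spreading advice amounts to a small bin-packing problem: put the busiest VLANs on separate NICs. A minimal greedy sketch, with entirely made-up per-VLAN traffic estimates just for illustration:

```python
# Greedy spread of VLANs across 4 NICs: place the next-busiest VLAN on the
# least-loaded NIC. The Mbps figures are illustrative, not measured traffic.
vlan_load = {10: 400, 11: 250, 12: 200, 13: 150, 14: 100, 15: 50}
nics = {"em0": [], "em1": [], "em2": [], "em3": []}

for vlan, load in sorted(vlan_load.items(), key=lambda kv: -kv[1]):
    # pick the NIC with the smallest total assigned load so far
    nic = min(nics, key=lambda n: sum(vlan_load[v] for v in nics[n]))
    nics[nic].append(vlan)

for nic, vlans in nics.items():
    total = sum(vlan_load[v] for v in vlans)
    print(nic, vlans, f"{total} Mbps")
```

In practice you would eyeball this from real traffic graphs rather than script it, but the principle is the same: the heaviest talkers should not share a 1Gbps port.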
Normally this would just be done with an L3 switch. Guessing you have some sort of budget constraint? And you happen to have these servers lying around?
But jahonix is right on the money with the suggestion to place your VLANs carefully on the NICs, or you could run into some bottlenecks there depending on which VLANs see the most traffic, etc.