pfSense on 10G WAN with Suricata
Hi. Due to a fiber program in the area where I am building a new house, I will have low-cost 10 Gbps Internet access available in a few months. Right now I run pfSense and Suricata (on the WAN side only) as a guest VM on a dual E5-2660 host with 128 GB of RAM.
I'd also like to run Suricata on at least some of my VLANs (the adult devices and the kids' network) in watch-only mode, so I can map security events back to the actual device, which I can't seem to do with Suricata running only on the WAN in blocking mode.
Do you think my hardware can handle a 10 Gbps connection in this configuration, or should I do a new build? If so, what hardware would be appropriate?
jahonix:
Your hardware might be fine, you just have to wait for the software side to catch up…
As described in the previous sections, the design of operating systems limits packet-processing performance. Tests performed in our lab have shown that most state-of-the-art packet processing frameworks … can very seldom process more than 50% of the capacity of a 10 Gbit/s Ethernet link when using minimal-size packets.
You won't get 10 Gbps throughput from a software router on x64 hardware until VPP and DPDK take over the packet processing, which is planned for pfSense 3.0 or so.
And the overhead introduced by your hypervisor further decreases your max throughput.
pfSense is the best way I know to run routing and Suricata IPS in the same system, on the same box.
If you want to run Suricata and the router in separate boxes, a MikroTik router with a 10 GbE port, such as the CCR1016-12S-1S+, could benefit from its switch chip. But that is still a Linux solution: the MikroTik (RouterOS is Linux-based) plus a Suricata box (also Linux-based).
They may not reach the 14.88 Mpps line rate of 10 GbE with 64-byte packets, but I would think they could get near or at 10 Gb/s of throughput for average packet sizes.
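For context, the 14.88 Mpps figure falls out of the minimum Ethernet frame size plus the fixed per-frame overhead on the wire; a quick sanity check:

```python
# 10 GbE line rate for minimum-size frames.
# A minimum Ethernet frame is 64 bytes, and each frame also carries
# 8 bytes of preamble/SFD plus a 12-byte inter-frame gap on the wire.
LINK_BPS = 10e9
MIN_FRAME = 64
WIRE_OVERHEAD = 8 + 12  # preamble/SFD + inter-frame gap, in bytes

bits_per_frame = (MIN_FRAME + WIRE_OVERHEAD) * 8  # 672 bits per frame
pps = LINK_BPS / bits_per_frame
print(f"{pps / 1e6:.2f} Mpps")  # → 14.88 Mpps
```

With larger, average-sized packets the pps requirement drops sharply, which is why near-10 Gb/s throughput is far easier than 64-byte line rate.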
Netflix gets about 50 Gb/s out of stock FreeBSD + nginx for serving their content, limited mostly by memory bandwidth. With special tweaks, mostly reducing the copying of data to make more efficient use of memory bandwidth, they have pushed that to ~90 Gb/s.
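A rough illustration of why copies matter (the serve rate and touch counts below are illustrative assumptions, not Netflix's actual numbers): every extra in-memory copy is one read plus one write of each byte, so it adds two memory "touches" per byte served.

```python
def mem_traffic_gbps(serve_gbps: float, touches: int) -> float:
    """Memory traffic implied by serving `serve_gbps` of payload when
    each byte is read or written `touches` times along the way."""
    return serve_gbps * touches

# Hypothetical copy-heavy path (disk -> kernel -> user -> socket -> NIC)
# vs. a trimmed path that touches each byte half as often.
print(mem_traffic_gbps(90, 8))  # copy-heavy: 720 Gb/s of memory traffic
print(mem_traffic_gbps(90, 4))  # fewer touches: 360 Gb/s
```

The serve rate is fixed in both cases; only the implied memory traffic changes, which is why cutting copies raises the ceiling on a bandwidth-limited box.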
My 3.2 GHz Haswell i5 quad forwards 1.4 Mpps with NAT + HFSC + Codel at roughly 16% CPU, less than one core. Measured with iperf from my Windows box (Intel i210 NIC) sending UDP to a public 1 Gb iperf server on the Internet. pfSense showed about 1.45 Mpps hitting the LAN and about 1.45 Mpps leaving the WAN with HFSC configured to 1 Gb/s. I assume the pps was limited by my Windows box, which sat at 70–80% CPU while trying to send that many packets. Top showed a load of about 0.64, which is only about 16% of the 4.0 max load for my quad core. Unfortunately I was monitoring this via the web UI instead of SSH, so some of that load is the inefficient web UI. My provisioned rate is only 150 Mb/s, so iperf was reporting ~85% loss, but I mostly cared about the pps the interfaces reported, not the iperf stats.
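Making the load arithmetic explicit, using the load average and core count reported above:

```python
# Unix load average vs. core count: a load of 0.64 on a 4-core box
# means well under one core's worth of runnable work.
load_avg = 0.64
cores = 4
utilization = load_avg / cores
print(f"{utilization:.0%}")  # → 16%
```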
Suricata is going to ruin most of this, but I assume memory bandwidth will be a big factor. You're going to want lots of bandwidth (quad channel), the lowest latency you can get, probably 8 cores, and plenty of L3 cache. <-- my layman's opinion