pfSense on a PowerEdge 1850
-
Well, it looks like I found the hardware limits of the new server as well. We were able to push about 500Mbps and 80,000 PPS with no issue. Once we get to 600Mbps and 100,000 PPS we get input errors (NIC buffer overruns). While doing some realtime troubleshooting, I noticed that the errors occur exactly when one of the 4 CPUs hits 100% on the kernel em0 queue process. em0 is my outside interface. So it appears my earlier suspicion applies here: the CPU is too busy to pull packets off the NIC buffer in time and I end up with overruns. The CPU is an Intel(R) Xeon(R) 5130 @ 2.00GHz, so it looks like I'm going to be searching for another box. I'm doing 1:1 NAT on over 5,000 hosts, so I suspect that's driving the CPU higher than I expected. The attached pic shows CPU1 at 84%, but "top -P" shows it hitting 100% when the packet loss occurs.
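For anyone wanting to watch this live, here's roughly what I was running (a sketch from memory, so treat the exact flags as approximate; em0 is just my outside interface):

netstat -hw 1    # per-second interface totals in human-readable form; watch the errs column
top -SHP         # per-CPU load with kernel threads shown, to see which core the em0 queue pins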
I'd love to put the Ubiquiti EdgeRouter inline and test their PPS claim here, since I'm way under 1,000,000 PPS :P (j/k)
Out of curiosity, does anyone know why the RRD graphs don't show individual CPU/core stats? The CPU data there looks like it's the average of all 4 CPUs, which doesn't really help in troubleshooting a problem like this. I did an snmpwalk and found utilization data for all the CPUs, so I'm graphing them separately in Cacti now. (HOST-RESOURCES-MIB::hrProcessorLoad.x)
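In case anyone wants to replicate that, the walk looks something like this (assuming the Net-SNMP tools on the poller and the SNMP service enabled on pfSense; the community string and address are placeholders):

snmpwalk -v 2c -c public 192.0.2.1 HOST-RESOURCES-MIB::hrProcessorLoad
# numeric OID equivalent: .1.3.6.1.2.1.25.3.3.1.2
# one row per processor; the value is the average busy percentage over the last minute

Each hrProcessorLoad.x row maps to one core, which is what I'm feeding into Cacti.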
Some data from my troubleshooting is below in case someone spots something. I have a lot of experience troubleshooting networks in general, but I'm very new to BSD so I could be missing something.
            input        (Total)           output
   packets  errs idrops      bytes    packets  errs      bytes colls
       86k    83      0        73M        87k     0        73M     0
      100k   155      0        85M       101k     0        85M     0
       96k     0      0        82M        97k     0        82M     0
       99k    74      0        82M       101k     0        82M     0
       96k     0      0        82M        98k     0        82M     0

dev.em.0.mac_stats.missed_packets: 2294752
dev.em.0.mac_stats.recv_no_buff: 4617837
dev.em.0.mac_stats.recv_undersize: 0
dev.em.0.mac_stats.recv_fragmented: 0
dev.em.0.mac_stats.recv_oversize: 0
dev.em.0.mac_stats.recv_jabber: 0
dev.em.0.mac_stats.recv_errs: 0
dev.em.0.mac_stats.crc_errs: 0
dev.em.0.mac_stats.alignment_errs: 0
dev.em.0.mac_stats.coll_ext_errs: 0
dev.em.0.mac_stats.xon_recvd: 9112
dev.em.0.mac_stats.xon_txd: 120
dev.em.0.mac_stats.xoff_recvd: 9112
dev.em.0.mac_stats.xoff_txd: 120
dev.em.0.mac_stats.total_pkts_recvd: 10671726540
dev.em.0.mac_stats.good_pkts_recvd: 10669413564
dev.em.0.mac_stats.bcast_pkts_recvd: 15097
dev.em.0.mac_stats.mcast_pkts_recvd: 9664
dev.em.0.mac_stats.rx_frames_64: 240300603
dev.em.0.mac_stats.rx_frames_65_127: 744037531
dev.em.0.mac_stats.rx_frames_128_255: 281908686
dev.em.0.mac_stats.rx_frames_256_511: 135974542
dev.em.0.mac_stats.rx_frames_512_1023: 172724810
dev.em.0.mac_stats.rx_frames_1024_1522: 9094467392
dev.em.0.mac_stats.good_octets_recvd: 13931850472813
dev.em.0.mac_stats.good_octets_txd: 1173620928614
dev.em.0.mac_stats.total_pkts_txd: 5912173538
dev.em.0.mac_stats.good_pkts_txd: 5912173297
dev.em.0.mac_stats.bcast_pkts_txd: 2117
dev.em.0.mac_stats.mcast_pkts_txd: 2

vmstat -i
interrupt                          total       rate
irq14: ata0                          376          0
irq20: uhci1                      437491          0
irq21: uhci0 uhci2+               541201          0
cpu0: timer                   1165155769       1997
irq256: bce0                    23965829         41
irq257: mfi0                     1297902          2
irq258: em0                   2536851814       4350
irq259: em1                   2695135942       4621
cpu2: timer                   1165155721       1997
cpu3: timer                   1165155724       1997
cpu1: timer                   1165155721       1997
Total                         9918853490      17008
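Side note on the counters above: recv_no_buff climbing alongside missed_packets means the RX ring itself is filling before the host can drain it. One thing I may try (my own guess, not something anyone suggested here) is enlarging the em(4) descriptor rings via loader tunables:

# /boot/loader.conf.local -- em(4) ring sizes; 4096 is the driver maximum
hw.em.rxd=4096
hw.em.txd=4096

That only buys headroom for bursts, though; if the core stays pegged it just delays the overruns.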
-
I don't really have experience at this sort of traffic level but it seems like you should be able to do better than that on those servers. That's just a general impression though. It would be useful to get an opinion from someone more experienced.
Could this be a situation where IP fastforwarding could be usefully enabled? It can cause problems, notably with IPSec.
https://forum.pfsense.org/index.php?topic=57723.0
What hardware offloading options do you have enabled?
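If you want to experiment with it, it's a single sysctl (a sketch assuming stock FreeBSD behaviour; test it out of hours given the IPSec caveat):

sysctl net.inet.ip.fastforwarding      # check the current value (0 = off)
sysctl net.inet.ip.fastforwarding=1    # enable

To make it persistent you can add it under System > Advanced > System Tunables in the GUI rather than editing files by hand.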
Steve
-
I thought it could do better too but the numbers say otherwise. I have a simple ruleset of about 5 rules on each interface. I have not loaded any packages. No VPN. I do log everything to syslog but that is a requirement that I can't get away from.
Hmm, interesting option. We will not be using IPSec terminated directly on this box, so that's not an issue. However, students do use VPN clients which will go through the firewall. I'll have to research it more to see if anything else might break by applying it. With over 3,000 users bringing every device you can imagine into a dorm room, I'm apprehensive about what it might break.
-
Hmm, I imagine it would break IPSec through the box and probably generate some complaints! It can dramatically increase throughput in some instances though. There may be other opportunities for tuning as well.
Earlier I said that the ERL had an ASIC to increase throughput, but I think that was wrong (I can't edit it now). It looks like it has a closed-source IP forwarding module that can run separately on one of its 8 cores. No chance of a FreeBSD driver, but maybe an equivalent in the future.
Steve
-
The results are somewhat expected. Currently pfSense is using an old pf that is single-core only. The only real reason to run pfSense on a multicore machine is for the add-ons to use the other cores while pf filtering is stuck on one.
The faster the clock speed of a single core, the more throughput you will observe. The pfSense hardware sizing guide has 2GHz machines topping out at around 500Mbps. You got it to go a bit higher. I would imagine that you could get a lot more from a 3.6GHz or an overclocked machine at 4GHz.
There has been talk about upgrading to the newer pf, but I don't know much about it or even when. Perhaps 2.2 or 2.3. It should have multicore support if it's based on the newer code. (Note, I am not with ESF and I don't know the plans at all.) Just hoping that we get multicore/multithreaded before I need it.
-
I looked at the CPU requirements and saw a 3GHz was recommended, but it doesn't mention anything about the CPU architecture. The Dell 1850 at the beginning of this thread was a 3GHz Xeon but an older architecture (800MHz FSB). My current 2GHz (1333MHz FSB) is pushing twice the traffic, so it gets kind of tricky comparing the older CPUs with the newer models.
Do you know the name of the actual pf process so I could monitor it? I see that the kernel process is the one taking up all the CPU, and it's across 2 cores (cpu1 em0, cpu2 em1 in my last screenshot). Is that the OS pulling packets off the NIC before the packet filtering process? I'm used to Cisco ASAs, where I would look at the dispatcher process for filtering CPU usage. Not sure what the equivalent is here.
Lastly, do you know the "top" command equivalent to Diagnostics -> System Activity? The closest I got to it was "top -P", but that didn't show me as much detail as the System Activity menu.
Thanks for your patience with my newb questions.
-
I agree it doesn't mention that, but if you went with a 1950 with a faster proc, you might do well.
Not sure about the top command, but you can do a ps -ef while that is running and it would probably tell you.
-
top -SH
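(-S includes system processes and -H breaks the display out per kernel thread, which is what lets you see the individual interface queue threads; that's my reading of the flags, anyway.)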
The hardware guide is a little outdated, as you've found.
Steve
-
In the little bit of reading I've done, it's basically about how many interrupts a second the core talking to that device can handle, so clock speed is judge, jury and executioner.
(And since newer architectures have improved IPC over time, I would think that might include interrupt handling as well, but I'm not sure.)
The HFT guys apparently have the same problems that busy networks do, which makes sense as both are doing tons of small random I/O.
From what I understand, if even a 4.x GHz core cannot do your workload and you can't spread it to other cores, the next step is to offload it to specialty hardware. Definitely explains some of those odd dual-core, high-clocked Xeon models out there.
-
Quote:
There has been talk about upgrading to the newer pf, but I don't know much about it or even when. Perhaps 2.2 or 2.3.
I missed this earlier. I'm not associated with ESF either.
The SMP-friendly pf is in FreeBSD 10, so pfSense 2.2, which will be built on that, should include it.
http://svnweb.freebsd.org/base?view=revision&revision=240233
Steve