High CPU usage with Interface Bridges
Is it normal for CPU usage to increase when using multiple NICs in Bridge config?
My CPU usage was down around 5-8% (with powerd clocking down from 3.2GHz to 1.192GHz). When I enabled the first interface on the bridge, CPU usage went from 5-8% up to 29% (powerd clocking at 2393MHz), so that's quite a big difference. If I start using the whole bridge, CPU usage jumps up to about 90%, fans cranked up to full blast, etc. Very annoying, but I'm more concerned with other issues such as heat.
EDIT: Noticed that when the bridged interfaces have been idle, CPU usage goes back down to 7-11% and powerd clocks down to around 1.2-1.5GHz. However, the second I have active connections on the bridged interfaces, CPU usage spikes back up. With 5 interfaces bridged together it ranges from 11% CPU usage all the way up to 96%. :(
Really would like some help with this; at this rate, running switches will be cheaper than running a box capable of this config. Thx!
System is a Dell Precision 490 Workstation with dual Xeon 5063 CPUs (the low-energy CPUs that match the 3.2GHz Xeon 5060).
2x dual-core CPUs with HT @ 3.2GHz, 8 threads
8GB DDR2 PC2-5300 RAM (4x 2GB sticks)
gmirror on 2x 36GB WD Raptor HDDs
1x Intel MT 1000 Quad NIC (LAN3 & WAN3 on this)
1x Intel PT 1000 Dual NIC (LAN1 & WAN1 on this)
1x Intel PT 1000 Quad NIC (WAN2 and LAN2 + Bridged LAN1 on this)
I have space to add more NICs, etc., so is my HW configuration wrong? Should I put all WANs on one NIC and all LANs on another? Should I avoid putting a LAN and a WAN on the same NIC?
Would it be better to run 3 dual-interface cards (like 3x Intel Pro 1000 PT dual cards), run a LAN + WAN pair on each of those, and push to a switch?
This is my first attempt using this hardware config. Usually I'd run a Xeon 5060 with 8x 1GB RAM, keep the video card in the machine, and run 2x Intel Pro 1000 MT PCI-X dual cards with 2x Intel Pro 1000 PT PCIe dual cards, with each WAN & LAN line on its own card, pushed out to 3-4 switches. I'm trying to cut power consumption and noise, so I figured that instead of running switches plus the box, I could run PT & MT quad cards and bridge a few interfaces together to eliminate the need for all the switches (or one bigger switch with VLANs).
I am going to try this same config (or similar) on a much newer box with 2x Intel Xeon W5580 CPUs and 12GB DDR3 RAM. I have a feeling the power consumption will be similar, though.
Any thoughts or best-practices would be much appreciated.
Also noticing intermittent loss of connectivity on this config. On the System >> Advanced >> Firewall / NAT tab I changed the "Firewall Optimization Options" from "normal" to "conservative" and it got a bit better, but it's still dropping connections. This is a major issue; it's not acceptable, and I never experienced it previously.
One last related question; sorry for the number of replies, but I thought it might be easier for people wanting to quote/reply to one topic vs. all questions.
Regarding best practices: if I have 3-4 WANs and LANs per box, would it be better to combine the WANs on one card and the LANs on another, put each paired LAN + WAN on an individual card, or combine multiple LAN + WAN pairs on one card?
For example, which is better or considered best practice:
4x Intel Pro 1000 dual-NIC
Choice 1: put each respective LAN + WAN pair (1-4) on one of the 4 dual cards, so each card has its own WAN + LAN
Choice 2: put WANs 1-4 on two cards and LANs 1-4 on the other two, keeping LANs and WANs off the same card
If I were to run 2x quad cards, would it be better to run WANs on one and LANs on the other, or put LAN1 + WAN1 and LAN2 + WAN2 on one card and LAN3 + WAN3 and LAN4 + WAN4 on the other?
Trying to get a grasp of how to configure for the lowest power consumption. Also, for scalability purposes, trying to get a grasp of what is best in terms of system resource usage.
All of the WANs are fairly low bandwidth (below 35Mbps), so I'm not concerned (yet) with pushing multiple higher-bandwidth connections through the same box. At the point where I'm trying to push multiple 100Mbps WAN lines through the same box, I think I'd need to upgrade hardware anyway.
Thx again for looking and/or commenting!
When you have two interfaces bridged, all the traffic from each interface has to be read in and written out by the system. This includes all the traffic that doesn't need to be sent to the other half of the network. Bridging interfaces is not an efficient way to connect two network segments.
This is made considerably less efficient by the fact that pfSense filters all traffic through a bridge by default. Generally, bridges are only used for creating a transparent firewall. It is possible to turn off bridge filtering, though, which may reduce CPU usage considerably.
It shouldn't make much difference which physical NICs you use for each interface. The only considerations are that server-grade Intel NICs are generally better at offloading work from the CPU, and that each NIC has a limited-bandwidth connection to the system (PCI, PCI-X, PCIe x4, etc.). So if you need to use all four gigabit interfaces on a four-way card simultaneously, it won't squeeze down a standard PCI connection.
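To see why the bus matters, here's a rough back-of-envelope comparison using nominal peak rates from the published bus specs (the exact figures and the helper function are mine, for illustration; real sustained throughput is lower because of bus protocol overhead):

```python
# Back-of-envelope: can the host bus feed a quad-port gigabit NIC?
# Nominal peak rates only; actual throughput is lower.

BUS_GBPS = {
    "PCI 32-bit/33MHz": 133e6 * 8 / 1e9,      # ~1.06 Gbit/s, shared bus
    "PCI-X 64-bit/133MHz": 1066e6 * 8 / 1e9,  # ~8.5 Gbit/s
    "PCIe 1.0 x4": 4 * 250e6 * 8 / 1e9,       # ~8 Gbit/s after 8b/10b encoding
}

def can_sustain(bus_gbps, ports=4, duplex=True):
    """True if the bus peak rate covers the NIC's aggregate line rate."""
    demand = ports * 1.0 * (2 if duplex else 1)  # Gbit/s of GigE traffic
    return bus_gbps >= demand

for name, gbps in BUS_GBPS.items():
    print(f"{name}: {gbps:.1f} Gbit/s peak, "
          f"4x GigE full duplex OK? {can_sustain(gbps)}")
```

The takeaway matches the point above: legacy 32-bit PCI tops out around 1 Gbit/s for the whole shared bus, so a quad GigE card needs PCI-X or PCIe to run all ports near line rate.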
wallabybob:
5 interfaces bridged together range from 11% CPU usage all the way up to 96% CPU usage. :(
At first sight, bridging 5 interfaces doesn't look like a good thing to do if you want to keep CPU usage low. It's especially bad if you have lots of broadcast or multicast traffic, since received broadcast and multicast traffic has to be sent out all the other bridge interfaces.
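The flooding cost grows with member count. A trivial model (the function and the packet rates are hypothetical numbers purely for illustration):

```python
# A broadcast frame arriving on one bridge member is replicated to every
# other member, so the software bridge's transmit work scales with the
# number of members. Rates below are made up for illustration.

def flood_copies(members, bcast_pps):
    """Egress copies per second generated by flooding broadcasts."""
    return bcast_pps * (members - 1)

for n in (2, 5):
    print(f"{n} members, 1000 bcast pps in -> {flood_copies(n, 1000)} copies/s out")
```

So going from a 2-port bridge to a 5-port bridge roughly quadruples the replication work for the same broadcast load, on top of the per-packet filtering cost.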
If you want to keep pfSense CPU usage low, you should offload switching and bridging to external boxes. External switches are generally able to do switching and bridging rather more efficiently than a pfSense box because they use purpose-designed hardware rather than a general-purpose CPU.
I have re-read your posts a couple of times and it's not clear to me which interfaces you have bridged and why.
It was a semi-foolish attempt to consolidate devices. It started as an effort to reduce power consumption, since this location has poor electrical service. My thought was that if I could run it all on one box, we could save the power of the switches running 24/7. This also gave me hope of being able to get rid of a few very old switches.
What ended up happening was that the Dell box ran at such high CPU usage it far exceeded the power consumption and noise level of the old setup.
Time to look for a more efficient pfSense box and time to upgrade 3x 8-port switches to one 16-port managed switch.
That's definitely the way to go.
I'd be interested in what difference switching the filtering made, though.
I believe you can do it easily in System:Advanced:System Tunables.
You'll see that by default pfSense filters traffic on the bridge member interfaces and not the bridge itself. Just swap that around so it's only filtering traffic entering or leaving the bridge and not every packet on the member interfaces.
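For reference, the two knobs involved are FreeBSD's bridge pfil sysctls. A sketch of the swap described above, as a config fragment (the defaults shown are the usual pfSense settings; verify them on your own box before changing anything):

```shell
# Check current bridge filtering behavior (typical pfSense defaults shown):
sysctl net.link.bridge.pfil_member   # 1 = filter on each member interface
sysctl net.link.bridge.pfil_bridge   # 0 = don't filter on the bridge itself

# Swap it around: filter only traffic entering/leaving the bridge
# interface, not every packet crossing a member port. To make this
# persistent, set these values under System:Advanced:System Tunables
# rather than running sysctl by hand.
sysctl net.link.bridge.pfil_member=0
sysctl net.link.bridge.pfil_bridge=1
```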
I have 10 interfaces on my box and have often considered bridging them, but haven't because of the problems you ran into. However, I don't usually have enough traffic to make a useful judgement.
I tried that and it was much better, but still nowhere near the CPU usage I see when not running in bridge mode.
I did notice a difference between a more modern platform and an older one, a huge difference. Running what I described in my first post (Dell Precision 490 with 2x Xeon 5060 or 5063 CPUs, DDR2 RAM and Intel Pro 1000 MT quad NICs) vs. a newer Dell T5500 with a single Xeon W5580 CPU (same 4 physical cores + 4 HT threads), DDR3 RAM, and PCIe Intel Pro 1000 PT quad NICs, there was a noticeable difference between the two. That seems obvious and logical, since the W5580 CPU alone is worth more than the whole Dell Precision 490 box.
There was still high CPU usage on both systems, enough that I don't want to go back to that bridge config.
I'm going to look for something that's passively cooled (like an Atom box) or something similar that I can use with some decent switches.
I really like these platforms, but I'm quickly finding that the power consumption makes the hardware more expensive over the long run.