Switch or bridge for better performance?
-
I'm running 2.4 on a Watchguard XTM 5. Currently I'm only using two of the XTM's six gigabit ports, one each for LAN and WAN. The WAN goes to a Comcast gateway with 150Mbps/20Mbps service. The LAN port goes to an unmanaged gigabit switch, to which the rest of the LAN devices connect.
Would there be any noticeable performance advantage to bridging the XTM's remaining ports into a 5-port LAN, and moving the most internet-active connections to those ports? Or would it be irrelevant because the WAN speed is the bottleneck?
-
It's pointless, and what it would do is slow down connections between those devices.
-
It's pointless, and what it would do is slow down connections between those devices.
Kind of what I expected. Thanks.
-
Short answer: The switch is better.
Long answer: It depends :P I actually tested the bridged performance of my Intel i211 NICs and it was much better than I expected. With pfSense I could push around 870 MBit/s over the bridge, while the same devices would achieve 960 MBit/s when connected to a switch. So, a switch is preferable in general. But let's say you ran out of ports on your switch and want to connect just one or two more devices without buying a new or larger switch; then you could use the spare ports on your pfSense box and bridge them, given that the slightly decreased throughput doesn't hurt you.
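For anyone who wants to reproduce this kind of comparison, here's a minimal sketch using iperf3. The interface names and addresses are placeholders; on Linux the bridge itself can be built with iproute2 (on pfSense you'd create it in the GUI instead):

```shell
# Optional, Linux only: build a software bridge from two spare NICs.
# eth1/eth2 are placeholder interface names; run as root.
ip link add name br0 type bridge
ip link set dev eth1 master br0 && ip link set dev eth1 up
ip link set dev eth2 master br0 && ip link set dev eth2 up
ip link set dev br0 up

# Throughput test: one host plugged into each bridged port.
# On host A (192.168.1.10 is a placeholder for its address):
iperf3 -s
# On host B:
iperf3 -c 192.168.1.10 -t 30    # 30-second TCP throughput test

# Repeat with both hosts on the switch to get the comparison number.
```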
Btw. I did the same experiment with Linux on the same hardware. On Linux, the bridge performed just as well as the switch. The difference in maximum throughput between the bridge and the switch was just 1-2 MBit/s, which I consider within the range of measurement uncertainty. So, I guess it does depend on your NICs and how the drivers are implemented or optimized.
-
I would expect the bridge to be able to pass at line rate given sufficient processing power. There will be significantly higher latency than a switch, though.
Even if you can bridge at line rate that will be using CPU cycles that could be doing something more useful.
The only real reason to bridge interfaces is to filter between network segments in the same subnet.
That said I have bridged NICs before when I had spare ports (and CPU cycles) just to give additional access. Real switch ports are pretty much always better.
Steve
-
Stephenw10 brings up a great point about the latency. So while line speed might be very close, what was the added latency?
Ok, in a pinch ("I need a port, and I need it on this L2, and all my switch ports are full"), sure. But it's not the correct use of said NIC. If you need a switch port, use a switch.
-
I didn't test latency thoroughly, but from what I could see, there was no change that couldn't easily have been measurement uncertainty. But I recall an interesting old paper on the performance of a Linux-based bridge compared to a commercial switch, which also concluded that the bridge performs quite well in terms of both latency and throughput unless the CPU load on the system is high. They measured latency differences in microseconds, while I have only looked at milliseconds. In general, they found that only for small frame sizes was there a slightly higher latency, while for larger frame sizes it was about the same. It's this paper: http://facweb.cti.depaul.edu/jyu/Publications/Yu-Linux-TSM2004.pdf
Now, this relates to Linux, and that's different from BSD, of course. But I'd be surprised if FreeBSD performed so much worse than Linux in this regard.
That being said, I also ran this setup only for a couple of days as a temporary solution (which I don't need anymore), so I can't speak from long-term or profound experience.
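For a rough latency comparison along those lines, a sketch (the target address is a placeholder; plain ping only resolves milliseconds, so microsecond-level differences like the paper's won't show up):

```shell
# Ping across the bridge, then repeat across the switch, and compare
# the min/avg/max/mdev summary lines. 192.168.1.10 is a placeholder.
ping -c 100 -i 0.2 192.168.1.10

# For sub-millisecond resolution, a request/response benchmark such as
# netperf's TCP_RR test gives a finer-grained view of per-hop latency.
```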
-
Any 'software' bridge will be slower than an ASIC switch bridge. This is because doing it in software requires a lot of CPU cycles to get a packet from one interface to another. An ASIC just ends up using a LUT and in-hardware direct packet transfers. Where an ASIC might use 1 or 2 cycles to forward a packet from one interface to another, a software bridge on a CPU might need 2000 cycles. Initially, this is compensated a bit by the CPU running at much faster speeds, but it scales very badly. One interface to one other interface might work, but as soon as you add one or two more, or go to higher speeds it totally fails.
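To put rough numbers on that, a back-of-envelope sketch (the 2000-cycles figure is the ballpark from above, and the 3 GHz clock is an assumption, not a measurement):

```python
# Rough packets-per-second budget for a software bridge vs. gigabit line rate.
CPU_HZ = 3_000_000_000        # assumed 3 GHz CPU core
CYCLES_PER_PKT_SW = 2000      # software forwarding cost, per the estimate above

sw_pps = CPU_HZ / CYCLES_PER_PKT_SW   # packets/s the core can forward

# Line rate for one gigabit port at minimum-size (64-byte) Ethernet frames:
# 84 bytes on the wire (frame + preamble + inter-frame gap) = 672 bits each.
gige_min_frame_pps = 1_000_000_000 / (84 * 8)

print(f"software bridge budget: {sw_pps / 1e6:.2f} Mpps")
print(f"1 GbE line rate @ 64B:  {gige_min_frame_pps / 1e6:.2f} Mpps")
# A single gigabit port at small frame sizes already consumes nearly the
# entire CPU budget, so adding more ports or faster links scales badly,
# while an ASIC forwards every port at line rate in hardware.
```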
-
The last thing you said: the WAN is the bottleneck.
Extra LAN ports on a firewall are really intended for when you have multiple LAN segments (subnets), with the FW box configured as a router to route between those subnets.
Plus, ask yourself: do you want to ship gigabits of intra-LAN traffic through the FW, with its limited resources, or hand it to a dedicated box like a switch?