Bridged interfaces performance
I am in the process of upgrading my local home network to 10GbE for the backbone to the fileserver. I am running a pfSense instance on an ESXi box with DirectPath I/O and at least 2 Xeon E1270 cores reserved for it.
Currently I've got 1x 10GbE card and 1x 4-port Intel-based 1GbE card assigned to pfSense. The 10GbE link goes to the fileserver and I am planning to connect clients to the other card.
There is no routing/filtering needed on this part of the connection; I essentially want to use it as a close-to-10GbE switch, delivering 1GbE to each of the clients.
I am happy to sacrifice some performance and I understand that this can affect latency, but is there any other reason not to do this? Or would I be better off letting ESXi handle the interface bonding into a vSwitch?
I am mainly doing large file transfers over the links so I am not too bothered by latency overheads as long as the throughput works.
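For context, bridging in pfSense ultimately comes down to FreeBSD's if_bridge. A rough sketch of what that looks like at the OS level (pfSense itself is configured through the GUI, and the interface names `ix0`/`igb0`-`igb3` are assumptions for illustration, not taken from my setup):

```shell
# Create a software bridge and add the 10GbE uplink (ix0)
# plus the four 1GbE client ports (igb0..igb3) as members.
ifconfig bridge0 create
ifconfig bridge0 addm ix0 addm igb0 addm igb1 addm igb2 addm igb3 up
```

The key point is that every frame crossing `bridge0` is forwarded in software by the CPU, which is why a hardware switch forwards at line rate while a software bridge may not.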
I would rather go with a 10 Gbit/s-capable Layer 3 switch to handle that traffic.
Low-priced switches like the Cisco SG500X or D-Link DGS-1510 series can handle this
traffic well for you, and then not every packet has to be pulled through the
firewall! So please think about this and decide for yourself what to do.
One SFP+ or 10GbE port from the switch to the NAS or fileserver would normally be
enough to handle far more load from many clients on the network.
don't bridge if you just want a switch. performance is horrible
This is quite right, but on top of this, bridging usually brings some other problems as well, such as:
- packet loss
- packet drop
- port flapping
There is a golden rule that says: route if you can, and bridge only if you must.
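If you do bridge anyway, it is worth knowing that FreeBSD (which pfSense is built on) exposes sysctls controlling where pf filters bridged traffic. Filtering on the member interfaces as well as on the bridge means each frame can be inspected more than once. A commonly suggested tuning (a sketch, not an official pfSense recommendation) is to filter only on the bridge itself:

```shell
# Filter bridged traffic once, on the bridge interface,
# instead of on every member NIC it passes through.
sysctl net.link.bridge.pfil_member=0   # don't run pf on member interfaces
sysctl net.link.bridge.pfil_bridge=1   # run pf on the bridge interface
```

This reduces per-packet overhead somewhat, but it does not change the fundamental issue: the CPU still touches every frame.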