Looking for help understanding NIC bottleneck across subnets
-
I am looking to modify my existing pfSense setup, but I am having some trouble understanding communication across subnets. I want to use a single Intel quad-port NIC for multiple subnets: one for all wired machines in the house, one for the NAS and Plex server, and the other two for unrelated functions. Will I run into a content-delivery bottleneck with the machines consuming the content and the Plex server serving it sitting on separate subnets? I want to be able to run 4-5 simultaneous streams from the Plex server. I prefer to keep it and the NAS on their own subnet so I can use the firewall to carefully manage what does and does not have access to the NAS.
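To make the plan concrete, here is roughly how I picture the four ports being assigned (interface names and addresses are just placeholders, not a final design):

    igb0 -> wired machines in the house   (e.g. 192.168.1.0/24)
    igb1 -> NAS + Plex server             (e.g. 192.168.2.0/24)
    igb2 -> unrelated function #1         (e.g. 192.168.3.0/24)
    igb3 -> unrelated function #2         (e.g. 192.168.4.0/24)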
Thank you for your help!
-
Streaming some video from a NAS doesn't really take that much bandwidth. Of course this depends on the quality of the video.
A quad-port gigabit adapter won't be a bottleneck, for sure.
-
What hardware is pfSense actually running on? Mine is running as a VM on ESXi on an N40L; there are physical gigabit NICs for each segment, but it has a hard time pushing a full gigabit between segments. That being said, a few video streams don't really need all that much bandwidth anyway.
What is the specific NIC you're looking to get?
-
I want to plan on 4-5 streams of 1080p+ just for the sake of future-proofing. If the media streams won't result in a bottleneck, then I think I should be OK. All really sizable file transfers will stay within each subnet; anything across subnets will be intermittent, and no more than one transfer at any single time.
Motherboard: BIOSTAR A68N-5000 (AMD A4-5000 quad-core APU)
RAM: 8 GB Corsair
Quad-port Intel NIC I'm looking to drop in: Intel E1G44ET2
-
Had another thought on this issue: how would the maximum transfer rate between subnets be determined? What I mean to say is, would transfers between subnets be limited by the speed of the PCIe slot? By the card itself? By the processing power? I also wonder: if I had two sets of teamed NICs, with the NAS on one subnet and the user on the other, would the benefits of teaming be negated by the inter-subnet transfer?
-
Just a bit of simple math. Blu-ray is about 40 Mb/s on the high end, so if you want 5 devices streaming 1080p Blu-ray, you'll need about 200 Mb/s. If your videos are lower quality than Blu-ray, then less bandwidth is required. If you're re-encoding videos, just take your bitrate and multiply it by the number of devices you want streaming at the same time.
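If it helps, here is that back-of-the-envelope math as a tiny Python sketch (the 40 Mb/s figure is the high-end Blu-ray bitrate mentioned above; swap in your own numbers):

    # Streaming bandwidth estimate: per-stream bitrate times stream count.
    stream_bitrate_mbps = 40   # assumed high-end 1080p Blu-ray bitrate
    num_streams = 5            # simultaneous Plex streams you want to support
    link_speed_mbps = 1000     # one gigabit port

    required_mbps = stream_bitrate_mbps * num_streams
    print(f"Need {required_mbps} Mb/s of {link_speed_mbps} Mb/s "
          f"({100 * required_mbps / link_speed_mbps:.0f}% of one gigabit port)")

That works out to 200 Mb/s, about a fifth of a single gigabit link, so the streams themselves leave plenty of headroom.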
-
I appreciate your reply Harvy66, and I will be sure to keep this in mind when modifying my setup. My question about the math is more about the maximum transfer speed between two subnets, whether they live on the same NIC or on separate NICs, and what the limiting factor is therein. Thanks for the help!
-
Assuming you have gigabit NICs and a PCIe bus, the only limit is how much processing power pfSense has available. A C2758, for example, will easily push 1000 Mb/s of firewall and NAT throughput. Extra packages like Snort can drastically increase the amount of CPU power needed to achieve gigabit speeds.
The main limiting factor in throughput is usually just raw CPU power. If you want to get really crazy, you can install 10 Gb/s adapters in a box along with high-end Xeon CPUs and go nuts.
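As a rough sanity check that the bus itself isn't the limit, here is a small Python sketch. It assumes a 4-lane PCIe link at 2.5 GT/s per lane with 8b/10b encoding, which is typical for quad-port gigabit server adapters of the E1G44ET2's class; verify against your card's spec sheet:

    # PCIe headroom vs. worst-case NIC traffic, per direction.
    lanes = 4                 # assumed x4 link width
    gt_per_lane = 2.5         # assumed 2.5 GT/s signaling rate per lane
    efficiency = 0.8          # 8b/10b encoding: 8 data bits per 10 line bits

    pcie_gbps = lanes * gt_per_lane * efficiency   # usable bus bandwidth
    nic_gbps = 4 * 1.0                             # four gigabit ports saturated

    print(f"PCIe link: {pcie_gbps:.0f} Gb/s per direction, "
          f"NIC worst case: {nic_gbps:.0f} Gb/s")

With roughly 8 Gb/s of bus bandwidth per direction against at most 4 Gb/s of NIC traffic, the slot has comfortable headroom; the CPU remains the realistic ceiling.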
-
I'm thinking the limitation, as antillie has stated, is the CPU: how much time does it take pfSense to rewrite the source and destination layer 2 information as a packet goes from one interface to the other, and then check the rule set to see whether the transaction is allowed? I will do some tests on my network tomorrow to see if I can push the network to a full gigabit. My bet is the limiting factor will be the spinning disks in my NAS, but we will see. I'm thinking in a home environment this will be no problem, but in a production environment where you want absolute performance, a layer 3 switch would be optimal, though then you lose the benefit of pfSense's filtering.
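If you want a number rather than a guess, iperf3 between a host on each subnet will measure the actual inter-subnet throughput through pfSense. A minimal sketch, with 192.168.2.10 standing in for whatever address your server-side host actually has:

    # on a host in the NAS subnet (server side):
    iperf3 -s

    # on a host in the wired-machines subnet (client side), 30 s, 4 parallel streams:
    iperf3 -c 192.168.2.10 -t 30 -P 4

Because iperf3 generates traffic from memory rather than copying files, this also separates the routing/firewall ceiling from the NAS's spinning-disk ceiling.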
-
"for unrelated functions."
If you put a managed network switch between them, you could set up a LAG (LACP) across those two ports, giving you 2 Gb/s of aggregated throughput.
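For what it's worth, pfSense sets LAGGs up from the GUI (under Interfaces > Assignments), but under the hood it's FreeBSD's lagg(4). The shell equivalent looks something like this sketch, where em0/em1 are placeholder interface names and the address is just an example:

    # create the aggregate, bind two member ports under LACP, assign an address
    ifconfig lagg0 create
    ifconfig lagg0 laggproto lacp laggport em0 laggport em1
    ifconfig lagg0 192.168.2.1/24 up

Keep in mind that LACP balances per-flow, so a single transfer still tops out at 1 Gb/s; the aggregate only helps when multiple streams run at once.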