Quick help, min. hardware req. for 6x 1gbps nic
-
Hi guys, I'm searching and I don't know if I got this right:
http://www.pfsense.org/index.php?option=com_content&task=view&id=52&Itemid=49
So for 6x 1 Gbps NICs in a pfSense machine I need a 3 GHz+ CPU and 512 MB+ RAM?
thanks
-
What do you need 6 Gbps for?
Do you expect actual throughput of 6 Gbps, or just peak traffic? How many connections?
You need more RAM for more connections (about 1 KB/state).
You need more CPU power for more throughput.
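As a rough illustration of that 1 KB/state rule of thumb (the state count below is a made-up example, not a measured figure):

```shell
# Rough RAM estimate for the state table, assuming ~1 KB per state.
# The state count is an illustrative example only.
states=500000
ram_mb=$((states * 1 / 1024))   # ~1 KB each -> about 488 MB
echo "approx ${ram_mb} MB for ${states} states"
```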
Make sure you are not using PCI NICs, since the PCI bus caps out at a bit more than 1 Gbps.
PCIe works well. -
Make sure you are not using PCI NICs, since the PCI bus caps out at a bit more than 1 Gbps.
PCIe works well.

That's a good point. What motherboard and card combination is going to give you 12 Gbps (6 NICs x 1 Gbps in + 6 NICs x 1 Gbps out) sustained throughput? I expect it's possible, but it's likely to be pricey.
-
Well, pretty much any board should be able to support a quad NIC and a dual NIC, at least many server-class boards have the necessary PCIe configuration (and may include the dual to begin with), but getting that amount of forwarding performance may be difficult. I don't have enough experience with high-load pfSense to say, but it seems like those numbers would be difficult to achieve, even with a relatively large average packet size.
-
Start with a server-class board that has two Intel NICs and go from there. Something from Supermicro would be good. Then buy a couple of dual gigabit NICs (Intel as well) to fill in the PCIe or PCI-X slots.
This is a good choice, especially since PCI-X NICs can be had for cheap on eBay:
http://www.supermicro.com/products/motherboard/Xeon3000/3210/X7SBE.cfm
This might be even better, assuming all PCIe slots can take NICs:
http://www.supermicro.com/products/motherboard/Xeon3000/X58/X8STE.cfm -
What do you need 6 Gbps for?
Do you expect actual throughput of 6 Gbps, or just peak traffic? How many connections?
You need more RAM for more connections (about 1 KB/state).
You need more CPU power for more throughput.
Make sure you are not using PCI NICs, since the PCI bus caps out at a bit more than 1 Gbps.
PCIe works well.

It is a connection from optic fiber links.
Actually, no :)
Too many… + video surveillance :)

I had a machine with 6x 100 Mbps NICs and a Celeron 2.0 GHz with 512 MB RAM, and it worked damn fine...
Then I put 6 PCI 1 Gbps NICs inside, and it started to freeze randomly.
Then I added more RAM (512 MB more), and it was working better, but it still froze at least once per day.
Then I got a new machine with an AMD Phenom II X2 550 BE 3.1 GHz CPU, some random Gigabyte motherboard with lots of PCI slots, and 4 GB RAM...
Now on the system status: CPU 1%, memory 1%, HDD 1%...
And now everything works, until I change something under the DHCP server settings. Then it freezes, completely!
So I'm asking myself: is it possible that the machine is still too slow? Because I have had experience with pfSense for years now, and it is just great; my feedback to you guys, 1-10? 11! So I'm confused: on some machines I had uptime of 300 days :) and now if I touch DHCP it gets stuck :)
-
What NICs are you using? Is there an error (panic) on the console when this happens? This definitely doesn't sound like a lack of hardware, it sounds like you are pushing very little traffic.
-
I don't have a monitor connected to the pfSense box on the new machine, but I had one connected once on the old machine (2.0 GHz, 512 MB RAM),
and it didn't show any error; actually you could still work in the console, but you were not able to ping anything and pfSense didn't route traffic at all. So the only thing I could do was restart.
And it happens only when I change some data on the DHCP server. No matter what: adding a client, or adding the IP address of a WINS server or a second DNS server…
The NICs are Netgear GA311: http://www.netgear.com/Products/Adapters/WiredAdapters/GA311.aspx
And yes, at the moment I'm pushing a relatively small amount of traffic (about 200 users with lots of file sharing); only at peak do I get 1 Gbps...
I opened this topic because of this freeze after working in DHCP; I have doubts about the hardware, whether it is enough or not :(
-
You probably have a very unbalanced configuration. A single PCI bus does not have enough bandwidth to support even a single gigabit interface at line rate. A lot of motherboards put all the PCI slots on the same single PCI bus. So, in your case, it appears likely that a single PCI bus has six gigabit interfaces and ONE of those gigabit interfaces would be more than enough to consume all the available bus bandwidth and still be hungry for more.
Try forcing ALL your gigabit interfaces to run at 100Mbps (ifconfig commands or setting 100Mbps at the other end of each of the links) and see if that makes a difference.
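A sketch of what that looks like from a pfSense/FreeBSD shell; the interface names are examples (the GA311 is a Realtek chip, so the devices may show up as re0, re1, and so on):

```shell
# Force gigabit NICs down to 100 Mbps full duplex (FreeBSD ifconfig syntax).
# re0/re1 are example device names -- substitute your actual interfaces.
ifconfig re0 media 100baseTX mediaopt full-duplex
ifconfig re1 media 100baseTX mediaopt full-duplex

# Confirm the active media setting:
ifconfig re0 | grep media
```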
-
I will test that, thanks!
-
Then I put 6 PCI 1 Gbps NICs inside, and it started to freeze randomly.

:o Yep. That's the problem, per what wallabybob stated.
Although my recommendation above would still build a kick-ass server. ;D
-
What you could do is maybe work out a "better" network topology…
I know it's nice to keep things simple by having all connections go into one box, but is this the only way to achieve this? I know you are trying to get 6 optical lines in, but what are you doing with these? Are you doing a load balancing configuration? Do you have multiple subnets? Failover?
Would it be possible to split your load over, lets say 3 boxes, and route between them when required?
You'll find that while pfSense is a great firewall, there is a reason why <insert popular hardware firewall vendor here> gear is so damn expensive. A lot of 3rd-party gear has dedicated network processors (in addition to the regular CPU) which split up the work efficiently, whereas pfSense is just running on standard hardware.
Can you maybe give us insight into what exactly you are trying to do, and then we could maybe work out a nicer topology for you?
:)
-
It is an extended star topology and I want to raise the whole network to 1 Gbps… it is only a test phase....
I did a little testing and network recording, and lol, it seems that my continuous network traffic per interface is lower than 10 Mbps,
about 50 Mbps total :)
and only in peaks do I get 150 Mbps, maybe once per day for one second :)
But I will still get some server machine and try this configuration ;)
Guys, again, thanks for the answers!
-
I did some testing and research…
PCI has a throughput of around 130 mbps, so a 1000 Mbps NIC doesn't make any sense...
But I tried to test it, and if the traffic graph in pfSense is accurate, I got a highest throughput from one interface to another of 300 Mbps. Which also doesn't make any sense, if the maximum throughput is around 130 mbps per PCI slot.
Also, HDDs in standard PCs are just too slow; I had to copy files over the network with several PCs. The pfSense machine at 300 Mbps is using about 25% of the CPU (AMD 3 GHz).
It seems there is no point in forcing 1 Gbps on the LAN :)
Thanks guys
-
PCI caps at ~130 MByte/s, not Mbit/s (about 1000 Mbit/s).
If you achieve 300 mbps, that is Mbit/s.
Also keep in mind that you are measuring "throughput" -> 300 Mbit/s throughput is actually 600 Mbit/s load on the PCI bus (traffic going in and the same traffic leaving out).
To test throughput you don't have to copy files around with multiple computers.
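For example, with iperf between two hosts (iperf2 syntax; the server address is just an example):

```shell
# On the receiving machine, start an iperf server:
iperf -s

# On the sending machine, run a 30-second test, reporting every 5 seconds
# (192.168.1.10 is an example address -- use the receiver's IP):
iperf -c 192.168.1.10 -t 30 -i 5
```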
You can simply use iperf. -
Actually, you are right; the 300 Mbps (I know it is Mbit/s) is in on one interface and out on another, but I was sure that PCI was only 130 Mbit/s, not MByte/s.
Also, I want to try out live how heavily I can load the interfaces, with real, live traffic. -
Standard PCI can do a "burst" transfer of 4 bytes at a rate of 33 MHz, so it has an absolute maximum transfer rate of 4 x 33 MBytes/sec = 133 MBytes/sec, or 1064 Mbits/sec. Practical transfer rates are typically rather less than this because burst length is limited (to allow other devices a chance to use the bus) and a group of burst transfers has to be preceded by an address cycle. So if burst length were limited to 8 cycles, there would be an address cycle before the 8 burst cycles, yielding an overhead of 1 in 9.
I think I've seen that in optimum conditions a good PCI NIC can get a bit more than 600 Mbits/sec throughput, and that's using big frames and counting all the protocol overheads.
Hence the attraction of PCI Express, where a single slot has over 2 Gbits/sec throughput and each slot is an independent bus.
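That arithmetic, as a quick back-of-the-envelope check (integer math, so the figures come out slightly rounded):

```shell
# 32-bit / 33 MHz PCI: 4 bytes per bus cycle (using 33 MHz exactly).
peak_mbyte=$((4 * 33))              # 132 MBytes/s (~133 with 33.33 MHz)
peak_mbit=$((peak_mbyte * 8))       # 1056 Mbits/s
# 8-cycle bursts pay 1 address cycle each: 8 useful cycles out of 9.
usable_mbit=$((peak_mbit * 8 / 9))  # ~938 Mbits/s before protocol overhead
echo "peak=${peak_mbit} Mbit/s, usable=${usable_mbit} Mbit/s"
```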
-
Yup, did some tests again with iperf.
pfSense can't consume more than nearly 600 Mbps (up to 280 Mbps per interface), and that's it…
From machine to machine through a switch I get from 650 to 850 Mbps,
so it seems that I have to get something faster, with PCIe NICs :)

Guys, did you test throughput with PCIe risers? With two dual-port NICs per slot?
What is the situation then? -
Guys, did you test throughput with PCIe risers? With two dual-port NICs per slot?
What is the situation then?

How would risers make a difference?
Two double NICs per slot? Do you mean a quad-port card?

PCI Express (PCIe) is a serial bus. PCIe slots can have multiple "lanes" (typically 1, 4, 8 or 16) which multiply the bandwidth of the slot (a serial-parallel combination). If I recall correctly, a PCIe V1 lane has about 2 Gbps of bandwidth, while a PCIe V2 lane has about double that.
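Following that accounting (traffic in + out counted against ~2 Gbit/s per PCIe V1 lane), a quad-port gigabit card works out like this:

```shell
# Lanes needed for a quad gigabit NIC, counting traffic both in and out,
# assuming ~2 Gbit/s of bandwidth per PCIe V1 lane.
ports=4
traffic_gbit=$((ports * 1 * 2))                           # 8 Gbit/s in + out
lane_gbit=2
lanes=$(( (traffic_gbit + lane_gbit - 1) / lane_gbit ))   # ceiling division
echo "lanes needed: ${lanes}"
```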
If you want a 4-port card to be able to drive all ports at line rate, both in and out concurrently, then you need at least 4 lanes. Hence you need an x4 slot (or an x8 or x16 slot). -
No, I mean a dual-port card, not quad. Why? It is cheaper :)
Why would a riser make a difference?
Well, since it is one slot with a riser that has 2 slots, and 2 NICs… so I asked...
Many thanks for the answers!