Question: What motherboard are you guys using to handle 5 NICs?
I am planning a firewall that will have failover, DMZ, LAN1, LAN2 and WAN, and I am having some difficulty figuring out which board to use, as most boards don't come with five PCI slots.
Just looking for suggestions on motherboards with regard to reliability, etc.
Buy a card with four ports on one NIC. There is also a new Gigabyte board with the NVIDIA 680i chipset, and that has four ports as standard.
I'm liking the Tyan S3095. Two Intel GbE and a 10/100 onboard. One PCI slot, one PCIe x4 slot, and a mini-PCI. Flex ATX form factor. Getting five interfaces would require a dual-port PCI card, or a PCI NIC plus a PCIe NIC. I have an old ASUS full ATX board with five PCI NICs in it that's running fine. I just disabled everything I didn't need: parallel port, USB, etc.
Maybe some kind of appliance hardware would suit you better (they are usually 19" and make some noise). Also consider that you need some CPU speed and fast PCI buses if you want to push a lot of LAN to LAN traffic, maybe even something in the gigabit range.
@dotdash: Do you have the TYAN S3095 in use?
If so, have you measured the throughput between the two gigabit interfaces?
If any or all of these are going to be gigabit, and need to push gigabit simultaneously, then you would have problems with several gig NICs on the PCI bus. Get a server-class motherboard with multiple PCI or PCI-X buses, as well as PCI Express, and separate them that way.
Supermicro has some great combinations like that which will fit your needs. The PDSME+ is a great board, for instance.
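The back-of-envelope numbers (assuming the classic 32-bit/33 MHz PCI bus) show why the bus is the bottleneck:

```shell
# Classic PCI is 32 bits wide at 33 MHz, shared by every card on the bus:
# 32 bits x 33 MHz = 1056 Mbit/s theoretical peak (~132 MB/s), before overhead.
# A single full-duplex gigabit NIC can already demand 2000 Mbit/s of that,
# so several gig NICs on one shared PCI bus oversubscribe it badly.
echo $((32 * 33))   # prints 1056 (Mbit/s, theoretical shared bus bandwidth)
```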
On the Tyan S3095 the two Intel gigabit ports are connected via PCIe, so I'm interested in the performance.
I have several boxes deployed with the 3095. I haven't tested the throughput between the interfaces. I haven't had any setups where I had to send enough data through them to stress the box either. If you know of a quick test I can run, I'm putting together a couple this week. BTW this is the same board they use for their barebones appliance box, Trophy NR18, but I haven't seen any benchmarks on that either.
This would be great, and I think this little benchmark would be interesting for others too.
You can run iperf, which is included in many Linux distros. It's a little command-line tool which must be started as a server on one PC attached to the first LAN port of the 3095. On another PC attached to the second gigabit port it must be run as a client, and after a short time it shows you the throughput. While running the test, the CPU usage of the pfSense box would be interesting.
Of course it only makes sense if the two attached PCs both have proper gigabit LAN ports.
After this test you can connect the two PCs directly to each other and repeat it, so you can compare the throughput with/without the 3095 in the path, and we know whether the 3095 is suitable for high-throughput use ;D
How much memory and cpu power do you have in your boxes?
PS: Can you recommend a nice 1U case, preferably with little depth?
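To spell the test out a bit (iperf 1.7-era syntax; the IP address is just an example for the server-side PC):

```shell
# On the PC attached to the first gigabit port, start the server:
iperf -s
# On the PC attached to the second gigabit port, run the client
# against the server PC's address for a 30-second run:
iperf -c 192.168.2.10 -t 30
# iperf reports the measured throughput at the end of the run;
# watch the pfSense box's CPU usage while it's going.
```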
I'll see if I can snag a couple of machines to benchmark with, no promises tho. The boxes have 512MB and a Celeron M 410 (1.46G). gtweb.net has a nice short depth (13.5") 1U I've used before, but I'm using a 2U for these. The 1U limits the box to a single PCI card, via the riser card, and I would need to find a good 1U EPS 12v Power supply.
Have you already done a little benchmark with the S3095?
I'd love to hear what the power draw of that 3095 box is as well (and what PSU you use) – I'm trying to decide between the 3095 and the MSI MS-9642 (945GM2).
Haven't had time for testing in the lab lately. I did get some test machines set aside, and with a little luck, I'll be able to run some benchmarks this week.
P.S.- I'm using an AMS Mercury 460W EPS-12V Power Supply. It's overkill, but it seems to be bulletproof. Stable voltages, appears to be very well built.
Ok, some tests. My methodology was to grab two machines (approx. 1.8GHz Celeron, 512MB) and drop an Intel Pro/1000 GT NIC into each. Loaded FreeBSD 6.2 on them, pretty much stock, and installed iperf 1.7 from source. All tests were done with a simple iperf -s on the target machine and iperf -c x.x.x.x on the client. The client was behind the LAN and the server behind the WAN. Results varied a bit, so I ran three tests and averaged.
Test boxes via crossover cable 472 Mbits/sec
Test boxes via GB switch 478 Mbits/sec
Tyan 3095 running 1.2 beta1. em0=lan em1=wan crossover cables to boxes.
Standard config, except for changing LAN and WAN addresses.
398 Mbits/sec. CPU didn't go above 4% during the tests. Disabled the firewall and re-tested, no change in speed.
While I had the lab setup, I did a couple of other tests just for grins. I substituted a Cel550 test box for the 3095.
With onboard fxp0 and add-in fxp1: 92.5 Mbits/sec. CPU got up to around 30%.
Two Netgear cards, sis0 and sis1: 93 Mbits/sec. CPU usage was similar but lasted 2-3 times longer.
Two cheap Realtek cards, rl0 and rl1: 78.7 Mbits/sec. CPU peaked for slightly longer than with the sis cards.
Standard disclaimer: your results may vary, my test-ology is not perfect, and I could have screwed up a number of things. I did not account for various factors including: moon phase, ambient temperature, humidity, solar flares, background music, etc.
Thanks for testing. So the performance of the 3095 is not bad.
And I think the throughput would be even a bit higher if the two machines generating the traffic also had their LAN interfaces connected via PCIe instead of the slow 32-bit PCI bus.
The very low CPU usage is interesting. Have you also monitored the memory usage?
Box was headless and connected via crossover, so I wasn't watching it live. I plugged a laptop into LAN after the test to check the RRD graphs to see what the CPU did.
Ok no problem. I will order the same board soon and when I've done some performance tests I'll post them.
Here are the things you should know before getting the board:
- Update the BIOS. The initial version has flaky fan sensor readings.
- Board needs an EPS-12V PSU
- Board does not have standard P4 heatsink mounts, you have to get a Pentium M heatsink.
Ok thanks for these hints.
I also need a system with a lot of NICs:
2 WAN, 1 LAN, 1 VoIP LAN, 1 DMZ, 1 CARP, 1 WiFi = 7 NICs
I am planning to use the Intel PRO/1000 PT PCIe adapter as well as the Intel PRO/1000 MT PCI adapter.
I've considered the two Asus boards below, mostly because of the high number of PCI-E slots.
I only buy Asus boards…
Asus M2N-SLI Deluxe (AMD AM2): 4 PCIe and 3 PCI + 2 onboard GbE
Asus P5N-SLI (Intel LGA775): 5 PCIe and 2 PCI + 1 onboard GbE
I bought the M2N but haven't been able to get the onboard NICs to work yet.
I've posted that thread here: http://forum.pfsense.org/index.php/topic,4902.0.html
I can't help you with your NVIDIA NICs, but my impression is that you should use Intel NICs for a stable environment and less trouble.
And in my opinion it would be better to work with VLANs instead of using so many NICs for your internal LANs. To do so, your switches must also have VLAN support, of course.
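Just to sketch the idea (a hedged example: the interface name em0, the VLAN tags 10/20, and the addresses are all assumptions; pfSense sets this up from the web GUI rather than the shell, but on plain FreeBSD it looks like this):

```shell
# One physical NIC (em0, assumed) carrying two internal networks as 802.1Q VLANs:
ifconfig vlan10 create vlan 10 vlandev em0
ifconfig vlan10 inet 192.168.10.1 netmask 255.255.255.0 up
ifconfig vlan20 create vlan 20 vlandev em0
ifconfig vlan20 inet 192.168.20.1 netmask 255.255.255.0 up
# The switch port facing em0 must be a trunk carrying tags 10 and 20.
```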
I have not heard good things about NVIDIA chipset NICs and FreeBSD. Asus generally uses Broadcom on their server-class motherboards, and I think Broadcom NICs would be preferable to the NVIDIA ones. I was considering an ASUS board myself, but couldn't find a small form factor board with Intel NICs… Also, consider alternatives to using a PCI or PCIe slot for everything. For example, the Tyan board I was talking about earlier only has one PCI and one PCIe slot, but I could do everything you described using the three onboard NICs, an add-in PCI and PCIe NIC, the onboard FireWire for CARP, and a mini-PCI wireless card. For the WANs, you could use a cheap dual-port fxp card in the PCI slot, and for faster interfaces, a dual-port PCIe Intel card. So you might be able to do this without needing five PCIe slots.
@dotdash: I ordered all components except for the heatsink. Please tell me what heatsink you used in your 1U case.
It was a Dynatron I31 active cooler. They also have a passive heat sink I51 that fits.
… the onboard firewire for CARP...
Can you use FireWire for CARP?
Ethernet over FireWire doesn't seem to work with pfSense. The fwe driver was taken out of the kernel some months ago.
I got CARP running over firewire using fwip, not fwe (which seems to be non-standard).
I have some notes here:
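In the meantime, a guess at the relevant bits (these commands are purely my assumption of how it might look on FreeBSD, not the actual notes; interface names and addresses are made up):

```shell
# Assumed sketch: IP over FireWire via fwip(4), then pointing pfsync at it.
# Give the FireWire IP interface an address on a private sync subnet:
ifconfig fwip0 inet 172.16.99.1 netmask 255.255.255.0 up
# Use it as the state-sync device for pfsync; the CARP VIPs themselves
# stay on the regular LAN/WAN NICs:
ifconfig pfsync0 syncdev fwip0 up
```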