Is this a good unit for pfsense?
-
Long story short, I currently have a WatchGuard 550e running my network, and it's a little slow. Great unit, love it to death, but the throughput over the "gigabit ports" seems to max out around 300Mb/s. I was surfing around eBay and found this nice unit…
www.ebay.com/itm/121063789838
My question is how good of a router would that be with pfsense? I would also throw in a 4 port Intel NIC, specifically this one:
www.amazon.com/Intel-Server-Adapter-PCI-E-EXPI9404VT/dp/B002JLKNIW
I mean, that 1U server seems too good for the price, so that's why I thought I would ask before I went shopping.
-
I don't think it would be a good low power usage server, but it will work nicely for a higher performance FW. Looks like it would push gigabit line speed.
-
You'd need to throw a hard drive or two into the unit, but the processor, chipset, and RAM should do a very good job with six 1Gb NIC ports. The 4-port board you've chosen is PCIe 2.0 x4, and that's enough bandwidth. The link between the chipset and the I/O hub (where the onboard NICs are connected) is an x4 link, so no worries there. Depending on where you seat the card, you'll either be talking directly to the I/O hub or to the chipset and then the I/O hub. It doesn't make a difference performance-wise; you'll never max out any of the lanes with your configuration.
To podilarius's point, it won't be energy efficient or quiet, but it'll handle a decent load.
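For a rough sanity check on those bandwidth claims, here's the arithmetic (my own numbers based on the PCIe 2.0 spec, not from this thread):

```python
# Back-of-envelope check on the x4 slot for the 4-port NIC.
# PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding,
# leaving 4 Gb/s of usable bandwidth per lane, per direction.
PCIE2_LANE_MBPS = 5000 * 8 / 10      # 4000 Mb/s usable per lane

lanes = 4
slot_mbps = lanes * PCIE2_LANE_MBPS  # 16000 Mb/s each direction

# Four gigabit ports saturated in one direction need 4 x 1000 Mb/s;
# PCIe is full duplex, so compare per-direction figures.
nic_demand_mbps = 4 * 1000

print(f"x4 slot: {slot_mbps:.0f} Mb/s per direction")
print(f"worst-case NIC demand: {nic_demand_mbps} Mb/s per direction")
```

So the slot has roughly 4x the headroom the card could ever use, which matches the "you'll never max out any of the lanes" point above.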
-
Thanks guys, power is not a problem here, and I have a drive for it laying around.
Parts ordered, can't wait for my new overkill firewall! Thanks again
-
looks to be a nice motherboard with built-in Gigabit Intel NICs
THOUGH it's gonna be noisy… there's 4 fans behind where the processors are...
and at LEAST 1 fan in the power supply. I would NORMALLY say you could probably unplug a few fans and watch the temperatures
on the CPUs, BUT you've got FB-DIMMs in that motherboard and they run HOT, so FORGET
that idea. If yours comes with the management card that is pictured, you can put that to use too.
definitely could load this critter up with Squid and lots of packages and not slow it down..
that's pretty overkill for most links. I can't see this system drawing less than 1.5 Amps continuously
(I have similarly configured pfSense boxes drawing up to 2 Amps). Keep us posted on how you make out.
-
@SunCatalyst:
I have similarly configured pfSense boxes drawing up to 2 Amps
At the 3.3V CPU core line? :P
I assume you mean at the 120V supply but perhaps you mean 230V? Big difference. ;)
Steve
-
Lol….
120Volts.
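For scale, 2 A at 120 V works out as follows (the electricity rate below is my own hypothetical figure for illustration, not from the thread):

```python
def watts(volts, amps):
    """Apparent wall-socket draw in VA; treated as watts here,
    i.e. ignoring power factor for a rough estimate."""
    return volts * amps

draw_w = watts(120, 2.0)                 # 240 W continuous
annual_kwh = draw_w * 24 * 365 / 1000    # ~2100 kWh per year

# Hypothetical rate of $0.12/kWh, purely illustrative.
annual_cost = annual_kwh * 0.12

print(draw_w, round(annual_kwh, 1), round(annual_cost))
```

That's why a box like this makes sense when "power is not a problem", but not as a low-power home appliance.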
-
Have you determined why it maxes out around 300Mb/s?
Maybe some other piece of equipment or configuration is the culprit.
-
The standard processor in the X550e is a 1.3GHz Celeron-M, so that may be restricting it directly. The other likely cause is that the 4 Gigabit NICs in that box are, unfortunately, all on one PCI bus. :-\
Steve
-
Not knowing the x550e at all…
cpu looks to be in a socket, either the Celeron M or a Pentium M..
Appears to be a PCI-e slot that maybe could be used for an Intel NIC?
-
Yep, a CPU upgrade is easy and these days very cheap.
Fitting a different NIC card is not straightforward. The format of the expansion slot appears to be proprietary. Some people have done it with some case modification and flexible PCI-e cables.
An interesting question that I have never really got a complete answer to is: What is the theoretical maximum throughput of two PCI Gigabit cards on the same bus? Assuming a 32bit 33MHz bus, standard desktop.
It's hard to know exactly what the bus in the X-Core-e Firebox is, or at least I'm yet to discover the correct incantation! Could be PCI-X, for example. However, the NICs are Marvell 88E8001, which are not PCI-X capable. I'd be interested in any further thoughts on that.
Previously discussed here: http://forum.pfsense.org/index.php/topic,43929.0.html
Steve
-
An interesting question that I have never really got a complete answer to is: What is the theoretical maximum throughput of two PCI Gigabit cards on the same bus? Assuming a 32bit 33MHz bus, standard desktop.
Standard desktop PCI does 33M bus cycles per second. A bus cycle can transfer 4 bytes (32 bits) of data. Hence the maximum achievable throughput is 32 x 33M bits per second or, in round figures 1Gb per second.
HOWEVER, a standard PCI DMA transfer consists of an ADDRESS cycle (specifying the memory address to be read or written) followed by a DATA cycle so limiting throughput to around 500Mbps.
SOME PCI devices can operate in burst mode: an ADDRESS cycle followed by a number of DATA cycles. The length of the burst is usually limited to ensure no device hogs the bus.
I haven't looked at the details of PCI operation for a while but I think there might be a requirement for one or more "idle" bus cycles after a transfer to allow the bus to be acquired by other devices.
Simple answer: At best, no more than a bit under 1Gbps; in practice probably considerably less.
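The arithmetic above can be sketched quickly (the single-cycle and burst cases follow the description; the burst length of 8 is my own illustrative choice):

```python
# 32-bit, 33 MHz desktop PCI bus arithmetic.
BUS_MHZ = 33
BUS_WIDTH_BITS = 32

raw_mbps = BUS_MHZ * BUS_WIDTH_BITS   # 1056 Mb/s absolute ceiling

def effective_mbps(burst_len):
    """Throughput with one address cycle per burst of burst_len data cycles."""
    return raw_mbps * burst_len / (burst_len + 1)

print(round(raw_mbps))           # ceiling: ~1 Gb/s
print(round(effective_mbps(1)))  # address + single data cycle: ~500 Mb/s
print(round(effective_mbps(8)))  # longer bursts approach the ceiling
```

Idle/turnaround cycles between transfers would pull all of these figures down a bit further, as noted above.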
-
Right, but that would be just one PCI NIC moving data to or from the host machine. If you have two NICs in a routing configuration on the same bus, I would expect the throughput to be half that or less.
What I could never pin down is whether it should actually be a quarter of the maximum bus speed due to having to send any return data.
Looking back at my own results, the box I used could only manage 200-250Mbps between the two PCI NICs. It seemed like it was a quarter of the bus speed. :-\
Kind of hijacked this thread. Apologies to ddggttff3.
Steve
-
Right, but that would be just one PCI NIC moving data to or from the host machine. If you have two NICs in a routing configuration on the same bus, I would expect the throughput to be half that or less.
What I could never pin down is whether it should actually be a quarter of the maximum bus speed due to having to send any return data.
Looking back at my own results, the box I used could only manage 200-250Mbps between the two PCI NICs. It seemed like it was a quarter of the bus speed. :-\
Without some form of logic analyser it is probably difficult to determine average or typical DMA burst lengths on the bus. And the amount of "return data", including TCP ACKs, is highly application dependent.
I think in many cases a bit of analysis is useful to determine whether a configuration is going to "work", rather than trying to guess, to the nearest 10Mbps, the throughput that might be achieved. In another topic still being discussed, the author is complaining of not getting Gigabit speeds between two interfaces. From the additional information posted, it looks as if the pfSense box is one of the generation where all the I/O devices were hung off a single PCI bus or an ISA bus. The analysis shows there is no way to sustain 1Gbps between two NICs on the same PCI bus, let alone a bus also shared with disk controllers, an ISA bridge, etc.
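To make the halving/quartering argument concrete: in a routed path, every forwarded byte crosses the shared bus twice (RX DMA into RAM, then TX DMA out the other NIC), and any return traffic on the same path crosses it twice more. A sketch, using the ~1Gb/s ceiling from earlier in the thread:

```python
RAW_PCI_MBPS = 33 * 32   # ~1056 Mb/s ceiling for a 32-bit/33 MHz PCI bus

# One-way routed stream: RX DMA in + TX DMA out = 2 bus crossings per byte.
one_way = RAW_PCI_MBPS / 2       # ~528 Mb/s

# Return traffic (ACKs, replies) on the same two NICs adds 2 more crossings.
with_return = RAW_PCI_MBPS / 4   # ~264 Mb/s

print(one_way, with_return)
```

The quarter-speed figure lands close to the 200-250Mbps measured earlier in the thread, which supports the "quarter bus speed" suspicion, though real DMA burst overheads and protocol mix keep measured numbers below the idealised value.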