Ideal Hardware Setup & General Question



  • Hi All,
    I am a few steps away from ordering hardware for our new pfSense box. There are some questions here mixing hardware and pfSense capabilities; I apologise for that.
    I am struggling with a few things I hope you can help me with.

    1. We intend to set up a box with 4 NICs: 1x LAN, 1x DMZ, and 2x WAN for a kind of "static" load balancing, routed depending on the service used (one static and one dynamic ISP link).
      Question: can I do the routing there, e.g. bind Squid to WAN1 and send e.g. SSH & SMTP via WAN2, making the routing decision based on the protocol used?
    2. We want maximum throughput from the LAN to the DMZ, basically 1 Gbit Ethernet, using e.g. Intel NICs to keep the CPU load low. (Please correct me if there is a better product you would recommend.)
    3. We would like an SSD for the OS disk.
    4. We intend to run Squid on the box, so an additional HD is required.
      I had the idea to build this with "standard" hardware; the problem I am facing is that only very few motherboards offer 4x PCIe x1 slots. My next idea was an Intel quad-port card in an x4 slot, where I am simply not sure whether it would deliver the required performance.

    5. Is there a manufacturer building servers that fulfill the requirements above, especially quad GbE and SSD disks? Basically the Dell R200 would cover what we require, but it has no SSD option. (My boss wants SSD, as he does not trust HDs to be reliable enough at such a single point of failure…)
    6. Or, alternatively, is there a recommended motherboard & NIC combination for such a scenario?
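    For illustration, what I have in mind for question 1 maps onto pf's route-to mechanism (which, as I understand it, is what pfSense generates underneath when a gateway is set on a firewall rule). A rough pf.conf-style sketch; interface names and gateway addresses below are made up, not our real ones:

    ```
    # Hypothetical interfaces and gateways -- substitute your own.
    lan_if  = "em2"
    wan1_gw = "203.0.113.1"   # static ISP link on em0
    wan2_gw = "198.51.100.1"  # second ISP link on em1

    # HTTP/HTTPS (what Squid fetches upstream) leaves via WAN1...
    pass in on $lan_if route-to (em0 $wan1_gw) proto tcp \
        from $lan_if:network to any port { 80 443 }

    # ...while SSH and SMTP are policy-routed out WAN2.
    pass in on $lan_if route-to (em1 $wan2_gw) proto tcp \
        from $lan_if:network to any port { 22 25 }
    ```

    So the decision would be made per destination port on traffic entering the LAN interface, if that is indeed how pfSense handles it.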

    I appreciate your feedback.
    Thanks,
    data



  • Discussions on hardware sizing can be found through the search function and in the documentation, such as here.

    You should be fine with a couple of onboard NICs and a 2-port PCIe NIC, or a pair of 2-port PCIe NICs.  Remember to get server-grade NICs where available.

    As for the other hardware: your boss is worried about the hard disk being a single point of failure, but is happy with only a single power supply?  Frankly, he's not thinking it through.  A far better choice would be dual power supplies and mirrored disks.  I've had boxes in that configuration running for years without any downtime caused by hardware failure; during that time I've seen many power supplies and hard disks fail.  Also, the reliability of SSDs is a long way from proven.

    I'd suggest that (if you're looking at Dell) the 1950 is a better choice.  For performance, go with 4 x 2.5" disks in RAID10 (that'll get you up to 600 GB of disk space); for space, go with 2 x 3.5" disks in RAID1 (1 to 1.5 TB of disk space).  With a dual-core 3.5 GHz processor, 4 GB of RAM and a dual-port Intel NIC you should be pretty close to being able to push 1 Gbit/s of traffic (though, as is often said, the nature of that traffic and your rules make a big difference).
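    To show where those capacity figures come from (the individual drive sizes here are my inference from the totals, not something stated above):

    ```python
    # Usable capacity for the two Dell 1950 disk options.
    def raid10_usable_gb(num_drives: int, drive_gb: int) -> int:
        """RAID10 mirrors pairs of drives, so half the raw capacity is usable."""
        return (num_drives // 2) * drive_gb

    def raid1_usable_gb(drive_gb: int) -> int:
        """RAID1 keeps a full copy on each drive, so usable space is one drive."""
        return drive_gb

    print(raid10_usable_gb(4, 300))   # 4 x 300 GB 2.5" disks -> 600
    print(raid1_usable_gb(1500))      # 2 x 1.5 TB 3.5" disks -> 1500
    ```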



  • Alternatively buy two R200s and set up CARP.

    I agree with Cry Havok. My personal experience is that well-engineered PSUs are much less likely to fail than hard drives, but a single PSU is still a single point of failure with moving parts. Mirroring is probably cheaper than SSD in a real enterprise-ready context anyway; I'm not even sure SSD prices have dropped yet in enterprise hardware, and I think it'll be another generation or two before we see it become widespread there. Anyway, if you really want to pursue the cheap-Dell-with-SSD option, it is possible: get the 2x PCIe x8 riser, drop in an Intel dual or quad adapter, and in the free slot plug in a PCIe-to-mini-PCIe adapter and a mini PCIe based SSD.

    Intel's quad adapter is a PCIe x4 device. That's roughly 1 GB/s of bandwidth, which is just about enough for all four ports running full steam in both directions at once. Okay, you might be slightly limited by protocol overhead, but in the real world you're never going to have 8 Gbit/s running through them.
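    A back-of-the-envelope check of those numbers, assuming first-generation PCIe at 250 MB/s per lane per direction (real throughput is somewhat lower once encoding and protocol overhead are counted):

    ```python
    # PCIe 1.x: 250 MB/s per lane, per direction (before overhead).
    PCIE1_LANE_MB_S = 250
    # Gigabit Ethernet: 1 Gbit/s = 125 MB/s, per direction.
    GBE_PORT_MB_S = 125

    pcie_x4_per_dir = 4 * PCIE1_LANE_MB_S   # MB/s each way across the x4 link
    quad_gbe_per_dir = 4 * GBE_PORT_MB_S    # MB/s each way at full line rate

    print(pcie_x4_per_dir, quad_gbe_per_dir)  # 1000 500
    ```

    On those raw figures the x4 link has headroom over four GbE ports per direction; overhead narrows the gap, but it shouldn't matter in practice.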

