Hardware suggestions

  • I'd like to use pfSense as a router in front of about four servers, three of which will be public but not in heavy use. Most traffic is simply email (I get about 20-30k emails a day, most of it spam that gets blocked; I'm currently using iptables to block spamming IPs), but there's still some web traffic. I'd say I get fewer than 2,000 unique visitors a day, with about 80% of them using an application that is not very bandwidth-heavy.

    Now, that said, I'm trying to determine what hardware I should get to run pfSense. Most of my servers are PE 2850s and 2950s, so I was looking at a 1950 or 1850 (just something with a dual- or quad-core Xeon, 4-8 gigs of RAM, and a small RAID 1), but I'm not sure if that's overkill for what I have. The majority of bandwidth will be between servers, so it'll just pass through the switch instead of hitting the router. Based on this, are there any recommendations on what hardware is good for pfSense and my scenario?
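    For reference, the iptables blocking mentioned above usually amounts to a per-IP DROP rule on the INPUT chain. A minimal sketch (the blocklist file and the IPs in it are hypothetical; applying the rules needs root, so this only generates the commands rather than running them):

    ```shell
    # Hypothetical blocklist: one spamming IP per line.
    cat > /tmp/spam_ips.txt <<'EOF'
    203.0.113.15
    198.51.100.7
    EOF

    # Emit one DROP rule per listed IP; review, then pipe to 'sh' as root to apply.
    while read -r ip; do
      echo "iptables -I INPUT -s ${ip} -j DROP"
    done < /tmp/spam_ips.txt > /tmp/block_rules.sh

    cat /tmp/block_rules.sh
    ```

    On pfSense the equivalent would be a firewall alias holding the offending IPs with a single block rule referencing it, so the iptables rules go away entirely once the router is in place.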

  • @jhilgeman:

    I was looking at a 1950 or 1850 (just something with a dual- or quad-core Xeon, 4-8 gigs of RAM, and a small RAID 1),

    You haven't specified a WAN link (to the Internet) speed. A 256 MB ALIX is reported to be capable of pushing 85 Mbps through the box, so the systems you are suggesting are possibly way overkill.

    Why do you propose RAID? Are you planning to run some pfSense packages that use lots of storage?

  • It's probably going to be a 100Mbps uplink, but average bandwidth is probably going to be under 25Mbps.

    I'm just using RAID 1 to reduce HDD failure risk.

  • @jhilgeman:

    I'm just using RAID 1 to reduce HDD failure risk.

    You could use the "nanoBSD" version of pfSense, which runs off a CompactFlash card and mostly keeps the "disk" mounted read-only (to minimise the number of writes). You could send logging information from pfSense to one of your servers for recording on a hard drive, and you can back up the single pfSense configuration file to a hard drive so that reconfiguration after a flash card failure is a simple procedure.
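    The single-file backup really is that simple: pfSense keeps its whole configuration in one XML file, config.xml (under /cf/conf on nanoBSD installs). A sketch of pulling it to one of your servers; the router address and backup directory are assumptions, and the scp is shown as a dry run so nothing is copied until you remove the leading `echo` (SSH must be enabled on the pfSense box):

    ```shell
    # Assumed values: adjust for your network.
    PFSENSE_HOST="192.0.2.1"            # hypothetical pfSense address
    BACKUP_DIR="$HOME/pfsense-backups"

    mkdir -p "$BACKUP_DIR"
    DEST="$BACKUP_DIR/config-$(date +%Y%m%d).xml"

    # Dry run: drop the leading 'echo' to actually copy the file.
    echo scp "root@${PFSENSE_HOST}:/cf/conf/config.xml" "$DEST"
    ```

    Restoring after a flash card failure is then just importing that one XML file into a fresh install.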

    For about 3.5 years I have been running a "full" pfSense install off a 1 GB solid-state disk module plugged into an IDE connector on the motherboard. I have a similar module plugged into the other IDE connector, and from time to time I do an image copy from the boot disk to the other one. In case of a disk failure (or "finger trouble" messing up my system) I just swap the modules.
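    The image copy described above is just a plain dd clone from one module to the other. A sketch, demonstrated here on scratch files so it can run anywhere; on the real box the if=/of= arguments would be the two IDE modules (e.g. /dev/ad0 and /dev/ad1 on FreeBSD, which is an assumption about the device names):

    ```shell
    # Stand-in "boot module": 256 KB of random data in a scratch file.
    dd if=/dev/urandom of=/tmp/boot.img bs=64k count=4 2>/dev/null

    # The actual clone step: a byte-for-byte copy onto the spare module.
    dd if=/tmp/boot.img of=/tmp/spare.img bs=64k 2>/dev/null

    # Verify the copy is identical before trusting it as a fallback.
    cmp /tmp/boot.img /tmp/spare.img && echo "copies match"
    ```

    With an identical clone on the second module, recovery is just moving the boot jumper/cable over and powering back on.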

  • Also, I'm open to suggestions on hardware, if someone has a good recommendation. I'm not tied to PowerEdge - it's just what I've used in the past for most servers, and I can find a decent one for about $200.

    I've found some really old ones running on Pentium 3 or 4 for $50, but I figure that might be too old…

  • I run a small ISP that serves rural clients wirelessly - we have roughly 600 users, including industrial sites like natural gas plants, coal mines, and lumber mills.  I run dual PE R300's using CARP for failover, and will soon run BGP for a multi-homed 100 Mbps fiber connection to the web using dual Intel quad-port gigabit NICs.  These also host our corporate network behind them, with 9 servers (one heavy video user), and are OpenVPN servers for our field staff.  They perform great, and as others have said, these two little guys are overkill.  I have 6GB of RAM in each, and as pfSense is 32-bit it ignores anything above 4GB.

  • Beware, wall 'O text ahead:

    Dell PowerEdge 2850's are all SCSI, and at this point they're old and I wouldn't trust the drives to run too long (and they're expensive for what they are.)  1850's are also SCSI, and hold only 2 drives.

    Either one could easily take an IDE or SATA card and find a place to mount a CF card with an adapter, but I'm not sure how they'll deal with booting from an "add on" card.  And you'll need to find a way to power the CF adapter (no loose power in a 2850, maybe a small one by the floppy bay.)  If I wasn't mid-move I'd test on the 2850's I have at home, but it'll be a while before I'll have free time to try that.  They will boot from USB, though.  I do that for a few old Dev/Test VMware hosts.

    I have run m0n0wall on a 2850 before, did the CD option and kept the config on a floppy.  Since it rarely needed a reboot, the floppy path worked fine.  You could do that here, keeping the config on floppy or USB.

    Keep in mind, Dell 2850's take up a -LOT- of power, even when idle.  They don't spin down the drives when not in use.  The SCSI drives will mostly likely be 10K or 15K RPM, they get hot, hot = power use. (Although, if not installing to the internal drives, pull 'em.)

    2850's and 1850's have single-core CPUs, although they're dual socket and often both are populated.  They'll take about 12GB of RAM easily, 16GB with 4GB DIMMs (do not try to populate all 6 slots with 4GB DIMMs for 24GB; it doesn't work.)  They can run VMware fine if you want to also use the hardware for other uses, but even though the processors are 64-bit capable, they don't support VT, so no 64-bit VMs.

    2850's and 1850's have PCI-X slots, they're the long 64 bit PCI slots, but they'll take regular PCI cards (not PCI-Express.)

    2950's and 1950's have SAS backplanes, so they can support SAS or SATA drives.  They should have a PERC5 SAS card, which supports both SAS and SATA drives, but not mixed in a single array (you can't mirror a SAS and a SATA drive together.)  While the drives are getting old in these machines, they're easily replaceable with commodity hard drives or SSDs (or SATA-to-CF adapters, if you have a bracket to get them to fit in the hot-swap tray.)  They'll boot from USB just fine.

    2950's and 1950's take a good chunk of power, though not quite as much as 2850/1850's.  They'll have dual- or quad-core processors and there are 2 sockets on the board (maybe both are populated.)  pfSense doesn't seem to benefit much from more than 2 cores, so a single dual-core proc may be your best power/performance sweet spot.

    2950's will take 32GB of RAM; not sure about 1950's, maybe similar.  (In theory some 2950's will take 64GB; there are reported success stories, but it's a gamble.)  They can run VMware fine, and the processors do support VT-x, so they can run 64-bit VMs.

    2950's have a few PCI-Express slots.  I think they're mostly x8 slots, but they may be wired as x4, can't remember off the top of my head.  1950's may have 1 or 2 PCI-Express slots.  Neither should have standard PCI or PCI-X slots.

    Of course, going further back, there's the Dell 2650's; they're also SCSI based, so you have the same hard drive worries.  They're P4 Xeon based, dual socket, up to 12GB of RAM, dual Broadcom GigE, no x64 support at all, PCI-X.  Not sure if they'll boot from USB.

    1750's were also SCSI based: Hyperthreading P4 Xeon, PCI-X, up to 8GB, dual Broadcom GigE, dual power supplies.  Maybe they'll boot from USB.

    1650's weren't the 1U version of the 2650's, btw.  They were PIII based, mostly an upgrade from the 1550: 4GB RAM max, SCSI, etc.

    If I recall correctly, 850's and 750's weren't much more than a motherboard in a 1U case, with SATA or IDE on board (or SCSI via an optional card), but no redundant power or anything.  850's were either a Pentium D or P4, up to 8GB RAM, dual Broadcom GigE, and could be had with PCI-X or PCI-Express.  750's were a single P4 or Celeron, up to 4GB RAM, dual Intel GigE.

    1550's are PIII based.  I actually have a few at work that we still support (not by choice.)  Dual Intel 10/100 Ethernet, 2 PCI-X slots, and 3x SCSI drives.  They'll take up to 2GB of RAM and dual power supplies (when equipped.)  Expect to replace the BIOS battery (cheap.)  Again, you might be able to put in a SATA card to boot from, or hack up the IDE CDROM cable for a CF card.  Don't expect to boot from USB.

    2450's and 2550's were dual PIII based (some socket, some slot), all SCSI.  They take 2 or 4GB of RAM, have PCI or PCI-X slots, and dual power supplies (optional.)  Maybe they'll boot from a card, etc.

    So, choosing a low-cost server that at least used to be enterprise level, I'd probably do a 1950 with dual power supplies, a single socket (dual or quad core), and a dual-port Intel PCI-Express NIC.  The onboard NICs should be Broadcom, but they're decent Broadcom NICs.  I'd use 2 small SSDs and mirror them with the PERC 5 (might be a 5ir, which only does mirroring or striping, not RAID 5, which should be fine for you.)

    Or, if you want to maximize your use, and I don't know your situation, take one of your 2950's, virtualize whatever was on it, run VMware ESXi on it, and run pfSense as a VM alongside the original "server".  But, that's just me seeing your world as another nail for my VMware hammer.
