Typical network with pfSense?
-
Yeah, but I wouldn't go so far as to say most surplus motherboards only have PCI; there are lots of decent desktop machines in surplus with PCI-Express. I recently picked up two machines off eBay for, seriously, $45 shipped; one had PCIe x16 and x1, the other had a pair of x16 (one electrically x4) plus an x1 (although they were Small Form Factor, so low profile if you choose to leave the motherboard in the stock case.)
As for the switch portion, a lot of us have more devices on our networks than the switches built into most routers would support, so we probably have a switch anyway. Plus, the old router I was using was 10/100, and most of the switches in my house are Gb (although that wasn't exactly the case when I ditched the WRT54G 7 years ago.) My switches are a mix of smart and dumb, but they're all operating as dumb (no VLANs configured.)
I also, personally, like the freedom that comes with separating out my AP, inasmuch as I can make it do anything I like, easily replace it, and/or easily add more. (I guess it's easy to add multiple APs behind a commodity SOHO "wireless router", but probably not if you want wireless separate from the LAN, unless you do it as a wireless extender, I guess.)
(btw, I'd rather compare it to something a bit more dependable than a Ferrari ;) )
-
Thanks for the quick replies!
Yeah, but I wouldn't go so far as to say most surplus motherboards only have PCI; there are lots of decent desktop machines in surplus with PCI-Express.
I guess I should have said the surplus motherboards I have, then. ;D
I actually have several complete PCs sitting around unused. One has pretty serious power for pfSense (Intel Core 2 Duo E6600, 2 GB RAM).
I'm just trying to justify the extra noise and power consumption.
-
I used to run it on an OLD P3 800MHz with 512MB RAM and an OLD 6GB HDD.. Thing ran and ran and ran, and was still running when I moved over to a VM.
In a home setup you really don't need much power. I had played with Snort and Squid and such on the box, but never really ran them for extended periods.. Well, take that back - Squid was running to filter porn when my sons were in that phase.. But that might have been an old IPCop install on my dual P2 400MHz box ;)
You would be amazed what you can do with some older PC hardware - in a home you would have little need for anything over 100Mbit for the WAN/LAN connection on the box, if that is all you have available. The only reason I run gig is that is what the hardware I am using has, etc.
As to power or noise, my P3 was really quiet and only drew about 50 watts or so.. I had a Kill-A-Watt on it - sure, it's going to draw more than, say, a 5W SOHO wireless router. But my N40L with 4 disks in it only draws about 55 watts, and that is with 3 VMs always on - NAS, pfSense and a Linux shell box. So it allowed me to shut down the other P4 box I was using as a NAS at the time, etc. I saw it as a 50% reduction in my power usage with more to play with, etc.
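If you want to put numbers on it, quick napkin math - the ~$0.12/kWh rate is just an assumption, plug in your own:
# yearly energy use/cost of a box running 24/7 - the wattage and $/kWh are assumptions
awk 'BEGIN { w=50; rate=0.12; kwh=w*24*365/1000; printf "%.0f kWh/yr, about $%.0f/yr\n", kwh, kwh*rate }'
# 50W box: 438 kWh/yr, ~$53/yr.. a 5W soho router works out to ~44 kWh/yr, ~$5/yr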
Can you put some more RAM in that Core 2? If so, it would probably make a viable ESXi host that you could run a pfSense VM on, plus a couple of other VMs to play with, etc.
-
Exactly. I ran m0n0wall on an old Celeron 400 for a little over 7 years. Admittedly, m0n0wall is smaller than pfSense, but mainly in the RAM requirements department, and the machine had 512MB anyway, so it would have been fine.
I'm now running a Core2Duo HP DC5700 Small Form Factor. The onboard NIC is PCIe, and I have an Intel dual-port Gb card in a PCI slot (it's a PCI-X card with the "X" portion hanging off the end; it was in a discount bin at a local surplus shop with a low-profile bracket.) WAN is on the Intel card and LAN is on the onboard Broadcom. I don't currently segment wireless, but I may eventually with the other Intel port. Or, if it seems like the PCI bus gets saturated, I could always buy a low-profile PCIe x1 card of some sort; all the PCIe NICs I have hanging around are from servers, so x4 or larger.
-
Guys, again, thanks for the responses!
I used to run it on an OLD P3 800MHz with 512MB RAM and an OLD 6GB HDD.. Thing ran and ran and ran, and was still running when I moved over to a VM.
In a home setup you really don't need much power. I had played with Snort and Squid and such on the box, but never really ran them for extended periods..
I have a collection of old computers - one with an AMD Duron 1.2 GHz and 1 GB of RAM, another with an AMD K6-2 400 MHz and 192 MB of RAM, and an ancient P120 with 64 MB of RAM that ran ClarkConnect and then Smoothwall back in the day. The Duron has a non-standard heatsink with a noisy fan and it's in a poorly-ventilated case, the K6 likely doesn't have enough power or RAM, and the P120 is even worse. The Core2Duo, although overpowered for pfSense, runs cool, and the case arrangement can be made quiet.
You would be amazed what you can do with some older PC hardware - in a home you would have little need for anything over 100Mbit for the WAN/LAN connection on the box, if that is all you have available. The only reason I run gig is that is what the hardware I am using has, etc.
Yeah, no reason to use gigabit out to the WAN; I'm on 20 Mbit down/512 kbit up. I figure 100 Mb WAN/LAN cards on the pfSense box would be fine, but the LAN side should go through a gigabit switch so the gigabit clients can do full-speed file transfers between themselves.
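(Rough numbers, ignoring protocol overhead - a quick back-of-the-envelope check on moving a 1 GB file:)
# idealized transfer times for a 1 GB (8192 Mbit) file at each link speed - no overhead accounted for
awk 'BEGIN { mbits=8*1024; printf "20 Mbit WAN: %.0f s\n100 Mbit: %.0f s\n1 Gbit: %.0f s\n", mbits/20, mbits/100, mbits/1000 }'
# ~410 s over the WAN, ~82 s at 100 Mbit, ~8 s at gigabit - the WAN is the bottleneck either way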
Can you put some more RAM in that Core 2? If so, it would probably make a viable ESXi host that you could run a pfSense VM on, plus a couple of other VMs to play with, etc.
Sure could - it supports up to 8 GB, and DDR2 is still widely available. Actually, I just checked and I was wrong earlier: it's already equipped with 4 GB.
I've never considered running pfSense in a VM, but I see around this forum that a lot of people do it. I'm trying to wrap my head around the OS-inside-an-OS filtering all packets, including those for the host OS, and it's doing my head in. :P However, it's something to consider - the Core2Duo runs Ubuntu fine, I now have Debian on it, and I've played with VMs on it before using VirtualBox; it doesn't break a sweat. I've never gotten into ESXi, looks interesting…?
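For what it's worth, I sketched out what the VirtualBox side might look like - just my guess at a minimal setup; the adapter names (eth0/eth1) and the memory/disk sizes are assumptions, and createmedium needs a reasonably recent VirtualBox:
# minimal sketch: a pfSense guest with two bridged NICs (WAN + LAN)
VBoxManage createvm --name pfsense --ostype FreeBSD_64 --register
VBoxManage modifyvm pfsense --memory 1024 --cpus 1 \
  --nic1 bridged --bridgeadapter1 eth0 \
  --nic2 bridged --bridgeadapter2 eth1
VBoxManage createmedium disk --filename pfsense.vdi --size 8192
VBoxManage storagectl pfsense --name SATA --add sata
VBoxManage storageattach pfsense --storagectl SATA --port 0 --device 0 --type hdd --medium pfsense.vdi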
-
I'm now running a Core2Duo HP DC5700 Small Form Factor. The onboard NIC is PCIe, and I have an Intel dual-port Gb card in a PCI slot (it's a PCI-X card with the "X" portion hanging off the end; it was in a discount bin at a local surplus shop with a low-profile bracket.) WAN is on the Intel card and LAN is on the onboard Broadcom. I don't currently segment wireless, but I may eventually with the other Intel port. Or, if it seems like the PCI bus gets saturated, I could always buy a low-profile PCIe x1 card of some sort; all the PCIe NICs I have hanging around are from servers, so x4 or larger.
Ah, this made me rethink things a bit.
Intel NICs are great because they do TCP offloading, but the Core2Duo has more than enough power to handle this in software, especially running light. Checking the online documentation, the onboard NIC (Realtek RTL8103EL) is actually connected to the PCIe bus, even though the motherboard has no PCIe slots except the x16 slot for the graphics card. So although it's onboard, it shouldn't be bottlenecked.
I have lots of spare 100 Mb PCI cards around; I could just add one for WAN or LAN and it wouldn't be bottlenecked either. They're not Intel cards and don't do TCP offloading - really they're just a PHY like the onboard NIC - but given that it's only 100 Mbit, that the Core2Duo will have plenty of spare power, and that the onboard NIC is on a separate, higher-capacity bus, there should be no constraints.
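(If I'd rather verify that from the pfSense shell than trust the docs, I think something like this should show it - re0 is just my guess at the driver name FreeBSD assigns the Realtek:)
# list PCI network devices with their bus addresses and bound drivers
pciconf -lv | grep -B3 network
# check which offload features the driver has enabled on a given NIC
ifconfig re0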
-
"but the LAN should be connected through a gigabit switch to gigabit clients located on the LAN for file transfers."
Again, if you're not going to have segments routed through pfSense, then its connection to the LAN only needs to be 100 if you only have a 20Mbit internet connection.. But sure, for LAN-to-LAN file movement, if the other LAN devices support gig, then yes, gig switch.
pfsense (lan) 100 ---- 100 (gig switch) gig -- gig (other lan devices)
-
Ah, this made me rethink things a bit.
Intel NICs are great because they do TCP offloading, but the Core2Duo has more than enough power to handle this in software, especially running light. Checking the online documentation, the onboard NIC (Realtek RTL8103EL) is actually connected to the PCIe bus, even though the motherboard has no PCIe slots except the x16 slot for the graphics card. So although it's onboard, it shouldn't be bottlenecked.
I have lots of spare 100 Mb PCI cards around; I could just add one for WAN or LAN and it wouldn't be bottlenecked either. They're not Intel cards and don't do TCP offloading - really they're just a PHY like the onboard NIC - but given that it's only 100 Mbit, that the Core2Duo will have plenty of spare power, and that the onboard NIC is on a separate, higher-capacity bus, there should be no constraints.
Ahh, but don't get stuck in the "except for the x16 slot for the graphics card" mindset - don't assume that just because it has a graphics card in it, it's a dedicated graphics port (unless it's an ADD2/SDVO slot.) As long as it's PCIe, put anything you want in it, including a single-port x1 card. If there's no onboard video, I'd just get a super cheap old PCI video card; anything should work there (unless you need low profile, then your search gets harder.)
When it comes to random cards for situations where bandwidth isn't much of a concern, sure, just about any card that pfSense supports should be fine. It's possible some may introduce a little latency, but good luck noticing it. Some may not be especially stable, but if you notice that, swap it out. If it does become a concern, even Intel PCI 10/100 cards are super cheap. If you can find a surplus shop around you that sells old crap, I'm sure they're not much more than a couple bucks; get a few, just in case. Or hit eBay, find someone with a lot of 'em, and pick up a few to consolidate shipping. Maybe find a seller with a few varieties: PCI, PCIe, dual port, etc.
It sounds like you're a bit like me in keeping stuff around 'cause it'll come in handy; so will these, since they're so well supported. (I still keep an old DEC 21040 "Tulip" based 10Mb card around - even Windows 95 retail shipped with a driver for it - it's in my LAN party rescue parts bin.)
Or pick up a dual-port Gb PCIe Intel card for less than $40 on eBay, put it in your x16 slot, get a PCI VGA card for under $10 on eBay, and be done with it.
-
Ahh, but don't get stuck in the "except for the x16 slot for the graphics card" mindset - don't assume that just because it has a graphics card in it, it's a dedicated graphics port (unless it's an ADD2/SDVO slot.) As long as it's PCIe, put anything you want in it, including a single-port x1 card. If there's no onboard video, I'd just get a super cheap old PCI video card; anything should work there (unless you need low profile, then your search gets harder.)
Again…didn't even think of that. I forgot you could put an x1, x4 or x8 card into an x16 slot!
There is onboard video. I have a surplus PCIe video card, and I'd rather not dedicate system memory to the onboard video, especially since I'm running it headless, but it might be a good tradeoff. Hmm...
Or pick up a dual-port Gb PCIe Intel card for less than $40 on eBay, put it in your x16 slot, get a PCI VGA card for under $10 on eBay, and be done with it.
A dual port PCIe x4 card would be ideal, actually!
-
Losing 32 or 64MB of system RAM to onboard graphics but gaining an x16 PCIe slot seems like a good trade-off to me. :)
Steve
-
You might consider using state-of-the-art hardware instead of outdated stuff. Newer systems can be much more energy-efficient.
One thing is that a more energy-efficient system can often be run fanless. This not only reduces the noise level but also increases reliability.
Another thing is energy cost - at least for users who don't have an electricity flat rate. Factoring in energy consumption, newer, more expensive hardware typically hits break-even after about two years compared to "some old junk" that's a few years old and that you can get for free.
I keep the "old junk" with its noisy fans around as backup, in case the shiny new modern systems fail.
Another point of caution: in my experience, many switches are unreliable. For example, I have several D-Link gigabit switches lying around which will occasionally just "hang" - network traffic doesn't get through any more and the switch has to be power-cycled. I can reproduce this phenomenon simply by pushing 100MBit/s of traffic through the switch; it will "hang" after a few minutes. That sucks. What good is a 1000MBit switch that can't cope with traffic at 10% of link capacity?
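If anyone wants to reproduce it, two hosts on the switch and iperf3 in UDP mode is one easy way to sustain that kind of load (the server hostname is a placeholder):
# on one host attached to the switch:
iperf3 -s
# on a second host, push ~100 Mbit/s of UDP through the switch for 10 minutes:
iperf3 -c <server-host> -u -b 100M -t 600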
I currently use Cisco gigabit switches; they seem more reliable. Not "Linksys by Cisco" - I've experienced issues with some of those as well!