Warning: Dell PE R410 & R510 Servers No Good for pfSense
-
Hey all,
I wanted to post my findings on using pfSense 2.0 on the Dell PowerEdge R410 & R510 servers. This is a follow-up to my previous thread:
http://forum.pfsense.org/index.php/topic,35608.0.html
Both systems have (2) integrated Broadcom NICs (bce)
The R410 I have includes an add-in PCI-Express Dual Port Broadcom NIC (bce)
The R510 I have includes (2) add-in PCI-Express Quad Port Intel NICs (igb)
I could not get pfSense installed at all on the R510 with the add-in NICs installed (lock-ups at varying steps of the installation). I could get pfSense installed without any of the add-in NICs installed.
I was able to get pfSense installed on the R410 with the add-in NIC installed. However, selecting option 99 to install to the hard drive took about 15 minutes to give me the option to do a Quick Install. (Without the add-in NIC installed, this was seamless.)
I did no further testing with the R510.
On the R410, almost every configuration change I made in the Web GUI caused me to lose access to it for about 5 minutes (even after restarting the webConfigurator from the console).
I tried every combination of options in the BIOS that I could think of and I tried enabling "Device Polling". Neither made any difference.
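For reference, this is roughly how polling is toggled from a FreeBSD 8.x shell (just a sketch; polling is per-interface and only works with drivers listed in polling(4), so it may not even apply to the bce ports):
Enable polling on one interface (igb0 here as an example):
#ifconfig igb0 polling
Confirm the POLLING flag shows up in the interface options line:
#ifconfig igb0
The related knobs live under kern.polling.*:
#sysctl kern.polling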
Now, the problem could be entirely with the Add-In NICs. I did not test without the Add-In NICs (as I need more than 2 NICs).
At this point, I'm debating loading ESXi and running pfSense as a VM. (I already tested this on the R410 and it installed fine. Didn't actually try to start configuring pfSense though)
On the R510, I'm debating using Untangle (This installed and configured fine)
It seems to me there is some incompatibility between these servers and FreeBSD 8.0. It feels like an IRQ issue, but I couldn't make any changes to alleviate it.
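If anyone wants to check the interrupt theory on similar hardware, the stock FreeBSD tools show per-device interrupt counts and rates (a device pinned at thousands of interrupts per second while idle would suggest a storm):
#vmstat -i
Or watch it live, refreshing every second:
#systat -vmstat 1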
Just wanted to post this so everyone is aware of the possible issues with the new Dell R Series Servers.
Jeff
-
Unless we have access to the hardware to run tests, there isn't much we can do about that aside from pure conjecture/guesswork/etc.
Have you tried loading plain FreeBSD 8.1 (or 8.2) on there?
-
I had a Dell T110 with two Broadcom dual-port cards plus the built-in bge port.
Someone advised me to increase the kern.ipc.nmbclusters value to fix the WebGUI access problem. I think the value needs to be at least larger than the total mbuf cluster count (see the max column in the netstat output below). I was getting tons of denied mbuf requests before raising it from 25600 to 51200; the output below is from after the change.
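If you want to make the same change from a shell, a rough sketch (on FreeBSD 8.x the value can, I believe, be raised on the fly but never lowered; size it for your own traffic):
Check the current limit:
#sysctl kern.ipc.nmbclusters
Raise it at runtime:
#sysctl kern.ipc.nmbclusters=51200
Then watch the denied counters again:
#netstat -m | grep denied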
#netstat -m
33155/1411/34566 mbufs in use (current/cache/total)
33153/1365/34518/51200 mbuf clusters in use (current/cache/total/max)
33152/768 mbuf+clusters out of packet secondary zone in use (current/cache)
0/77/77/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
82883K/3743K/86627K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines
-
jimp,
I tried installing FreeBSD 8.2 today. Stock FreeBSD did not recognize the SAS 6/iR RAID controller (a quick way to check driver attachment is below), so I couldn't even get past that hurdle. Not sure why pfSense recognized it without a hitch. Anyway, I've fallen too far behind on other server builds to spend much more time on this. For now, I'm just going to leave our existing Internet firewall on the same hardware and upgrade it to 2.0. For our internal firewall, I have to do something because I need more interfaces. I'm debating between Untangle and Vyatta right now. (Both install/run fine on the R510.)
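In case anyone else wants to poke at the controller issue, stock FreeBSD's pciconf will show whether a driver attached to it:
#pciconf -lv
Devices whose line starts with "none" have no driver attached; the mass-storage class entry should be the RAID controller.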
beafool,
Is that setting available in the GUI?
Thanks,
Jeff
-
I added mine via the GUI under Advanced settings, System Tunables. For the netstat command, I think you can run it via the GUI after installing the Command Prompt add-on.
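If you would rather set it outside the GUI, I think the usual FreeBSD way also works, with the caveat that pfSense rewrites /boot/loader.conf, so custom entries belong in loader.conf.local:
#echo 'kern.ipc.nmbclusters="51200"' >> /boot/loader.conf.local
Then confirm the running value after a reboot:
#sysctl kern.ipc.nmbclusters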
-
Interesting… we're using an R210 and it's working like a dream.
-
Oh sure… Rub it in ;)
-
I may have a breakthrough…. :)
Based on what I read in another thread, I tried several different keyboards on my R410 server. I came across a keyboard that consistently produces a smooth install and configuration. The only difference between this keyboard and the other 3 I tried is that this one is 5V/1.5A, whereas the other keyboards are 5V/100mA.
I never would have guessed that a different amperage on the keyboard would affect the software... (especially since Linux installed fine)
I'm going to try this keyboard on the R510 and post back my results next week.
-
Surely it's not the keyboard…
-
I guess I got too hopeful :(
Tried the same keyboard on the R510 and the install hung at "Configuring WAN Interface…"
I may go back and give ESXi another go...
-
Back on the R410…
Swapped out the 4-port PCI-e Intel NIC for a 2-port PCI-e Broadcom NIC (didn't need that many interfaces).
Now it's hanging during the install to the hard drive again...
I give up on these freaking boxes!!!!
-
Load ESX 4.1 and build a pfSense VM
-
I second that.
pfSense runs very well on ESXi with Dell Rxxx servers.
-
I did try that, and I posted in another thread that I saw considerable (at least in my view) throughput loss compared to pfSense installed on bare metal (I was able to install pfSense on the R410 without any of the PCIe NICs installed). I saw approximately a 50 Mbits/sec loss using ESXi. For completeness, here is my testing methodology:
Using iperf ("iperf -s" on one laptop and "iperf -c x.x.x.x -t 30" on the other) with a crossover cable between two Vostro laptops yields an average of 172 Mbits/sec. I then ran the same test through the following firewall distros; here are those results:
Laptop -> Laptop via Crossover cable
172 Mbits/sec
pfSense on bare metal
Intel -> Broadcom = 140 Mbits/sec
pfSense on ESXi using e1000 (with or without vm tools installed)
Intel -> Broadcom = 85 Mbits/sec
Astaro
Intel -> Intel = 165 Mbits/sec
Intel -> Broadcom = 147 Mbits/sec
Broadcom -> Intel = 132 Mbits/sec
Broadcom -> Broadcom = 140 Mbits/sec
Vyatta
Intel -> Broadcom = 114 Mbits/sec
Untangle
Intel -> Intel = 165 Mbits/sec
Intel -> Broadcom = 160 Mbits/sec
Broadcom -> Intel = 200 Mbits/sec
Broadcom -> Broadcom = 200 Mbits/sec
Note: the NICs in use also made a difference. When I did the pfSense test, I only tested going from Intel -> Broadcom.
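For completeness, the exact iperf invocations (x.x.x.x being the address of the laptop running the server side); classic iperf's -r flag will also run the reverse direction in the same session, which matters given the direction differences above:
On the receiving laptop:
#iperf -s
On the sending laptop (30-second run):
#iperf -c x.x.x.x -t 30
Optionally test both directions in sequence:
#iperf -c x.x.x.x -t 30 -r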