Intel x520 and pfSense 2.1
-
I have been struggling to get these to work properly. Whenever I start doing VLANs on them they go from okay to utterly useless with pfSense.
I have read the Intel readme and quite a few articles about nmbclusters, and that has helped a little.
Even by itself, without VLANs, I can only push about 600 Mbit/s across a link.
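For reference, the nmbclusters tuning mentioned above is normally done in /boot/loader.conf.local on pfSense. A minimal sketch, assuming the common advice of raising the mbuf cluster limit for 10G NICs (the value 1000000 is illustrative only; size it to your RAM and traffic):

```shell
# /boot/loader.conf.local -- raise the mbuf cluster limit for 10G NICs
# (illustrative value; tune to your workload)
kern.ipc.nmbclusters="1000000"

# Check usage at runtime with:
#   netstat -m
#   sysctl kern.ipc.nmbclusters
```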
Hardware-wise I'm using a Dell C6100:
- 2x L5520 (quad core + HT)
- 32 GB RAM
- 2x 1Gb NICs
- 1 Intel X520 as a PCIe card
- (looking into a 10Gb mezzanine card)
Currently we use the following:
3x quad-port gigabit NICs
+ 4 onboard gigabit NICs
We effectively have 4x trunks of 3 Gbps that do multiple VLANs each;
3 onboard ports are for WAN,
1 is for direct access.
If I can focus all the traffic down to two 10Gb ports as VLAN trunks, it would save me and my boss a rather large headache.
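At the FreeBSD level, a VLAN trunk over the X520 looks roughly like the sketch below; the interface name ix0 and the VLAN IDs/addresses are assumptions for illustration (in pfSense itself this is done from Interfaces > Assignments > VLANs):

```shell
# Hypothetical sketch: trunk VLANs 10 and 20 over the first X520 port,
# assuming the card attaches as ix0 (check with: ifconfig | grep '^ix')
ifconfig vlan10 create vlan 10 vlandev ix0
ifconfig vlan20 create vlan 20 vlandev ix0
ifconfig vlan10 inet 10.0.10.1/24 up
ifconfig vlan20 inet 10.0.20.1/24 up
```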
We did get a version of it working, but it required us to sit pfSense on top of VMware, and the traffic is still problematic at just under 500 Mbit.
We're not seeing high processor usage, so I doubt we have a CPU bottleneck, but I could be wrong.
My final hardware will be 2x dual-proc boxes, each with one X520, each with the ability to add more later if need be.
My budget is 10 grand total for the two boxes plus the SFP+ adapters.
Any recommendations?
-
Suspiciously, 500 MB/s is the upper limit of a single lane of PCIe 2.0.
(I'm assuming it's not actually 500 Mbit, since that's sorta slow.)
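The 500 MB/s figure falls out of PCIe 2.0's 5 GT/s per lane with 8b/10b encoding; quick arithmetic:

```shell
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding (8 data bits per 10 wire bits)
# 5e9 * 8/10 = 4 Gbit/s usable per lane; / 8 = 500,000,000 bytes/s = 500 MB/s
echo "$((5000000000 * 8 / 10 / 8)) bytes/s per lane"
```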
I'm guessing you are hitting the limit of a PCIe slot that something is plugged into somewhere.
That would explain the lack of CPU load.
-
Suspiciously, 500 MB/s is the upper limit of a single lane of PCIe 2.0.
(I'm assuming it's not actually 500 Mbit, since that's sorta slow.)
I'm guessing you are hitting the limit of a PCIe slot that something is plugged into somewhere.
That would explain the lack of CPU load.

I thought it was this also, but the slot is a PCIe x16 slot. I can reproduce this same issue on two separate motherboards. This is why I believe there might be a setting I'm missing somewhere that pertains to this card.
-
That speed could also correlate to the max speed of a not-too-swift SATA drive dragging things to a halt.
Are you running RAID and the NIC on the same channel?
I also see people hitting that exact same limit with same hardware running under ESXi.
(Not PFsense though)
http://communities.vmware.com/thread/392477?start=0&tstart=0
http://en.community.dell.com/support-forums/servers/f/956/p/19480498/20249003.aspx
Good luck.
-
We are going to try to redo the base install of pfSense on the box. We are going to exclude the ixgbe drivers and rebuild them with the current ones from the Intel site. We will see if it's any better as a non-virtualized environment.
As far as hardware is concerned, no one sees any problems with the specs that would cause these issues.
Also, we are not running RAID (though we plan on a mirrored RAID to cope with a hard drive failure).
-
Hi guys
I've had the same issue with current snapshots, which contain a more recent ixgbe from around FreeBSD 8.4.
At the moment you have to disable VLAN filtering in hardware; Jim and Chris have been kind enough to update the wiki:
http://doc.pfsense.org/index.php/Tuning_and_Troubleshooting_Network_Cards
When I executed the command I was able to receive packets again. I guess this may come with some performance disadvantage, but at least things are solid this way.
If you happen to be able to boot a vanilla 8.4-RELEASE and could test on your X520, that would be great, because right now I don't know if this is an outcome of the driver backporting or a general issue with the ixgbe driver.
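For anyone searching later, the workaround referenced above is disabling hardware VLAN filtering on the ix interfaces; a sketch, assuming the card attaches as ix0 (filtering then happens in software, which is the likely performance cost mentioned):

```shell
# Disable hardware VLAN filtering on the X520 (workaround for the
# ixgbe VLAN issue described above)
ifconfig ix0 -vlanhwfilter

# Verify VLAN_HWFILTER is gone from the options line:
ifconfig ix0 | grep options
```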
-
If you want to try the 2.5.8 version of the ixgbe driver:
fetch -o /boot/modules/ http://files.nyi.pfsense.org/jimp/ixgbe-2.5.8/amd64/if_ixgbe.ko
or
fetch -o /boot/modules/ http://files.nyi.pfsense.org/jimp/ixgbe-2.5.8/i386/if_ixgbe.ko
-or-
fetch -o /boot/modules/if_ixgbe.ko http://files.nyi.pfsense.org/jimp/ixgbe-2.5.8/amd64/if_ixgbe-no_tso.ko
or
fetch -o /boot/modules/if_ixgbe.ko http://files.nyi.pfsense.org/jimp/ixgbe-2.5.8/i386/if_ixgbe-no_tso.ko
And then toss this in /boot/loader.conf.local:
if_ixgbe_load="YES"
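After rebooting, it's worth confirming the replacement module actually loaded instead of the in-kernel driver; a quick check (the sysctl OID follows FreeBSD's usual dev.<driver>.<unit> layout):

```shell
# Confirm the standalone module is loaded...
kldstat | grep if_ixgbe
# ...and see which driver/description the first ix interface attached with
sysctl dev.ix.0.%desc
```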