Intel X520-DA2, kernel: CRITICAL: ECC ERROR!! Please Reboot!!
-
do you have any howtos?
-
Yes. http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004088
-
This guide is about host link aggregation. I need link aggregation on the guest side.
-
You asked for teaming on ESXi, not in pfSense.
Host link aggregation presents multiple physical adapters as one to the guest.
Have you tried separate physical adapters and LAGGing them in pfSense? Present more than one to the guest on ESXi?
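For reference, a guest-side lagg on pfSense/FreeBSD would be a sketch like this (the interface names here are examples, not from the setup in this thread; use your real member NICs):

```shell
# Hypothetical sketch: build an LACP lagg from two member NICs on
# FreeBSD / the pfSense shell. em0/em1 are example interface names.
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport em0 laggport em1
ifconfig lagg0 up
```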
-
Yes, but it's only possible if I use PCI pass-through. I'll try to check the option of CentOS + KVM + Open vSwitch as the hypervisor.
-
Hi all,
for now the one stable solution I found is using ESXi. None of the other options I checked provides stable 10Gb connectivity, so I'll try to write a guide to create the same setup as I have, or you can use it to create your own variants. I'm sorry for my English.
So, first, my hardware configuration:
two Dell R620 servers with:
- quad-core 3.3 GHz CPU
- 32GB RAM
- X540-DP + I350-DP on board and X520-DA2 add-in network adapters
- two 146GB SAS HDDs in RAID1
I used the VMware ESXi 5.1 Dell customized ISO (it includes all required drivers).
Step 1 (ESXi Install):
Install ESXi and configure one network interface for management access (I used one 1Gb port).
Connect to ESXi using the vSphere Client and let's configure networking.
Step 2 (Networking configuration):
Create three additional virtual switches (WAN, LAN, Interconnect). I used the following interfaces:
- 2 ports X520 for WAN
- 2 ports X540 for LAN
- 1 Port I350 for interconnect
If you need VLANs in the firewall, configure promiscuous mode on the Virtual Port Group (VPG), not on the switch.
For VLAN trunking use VLAN 4095 on the Virtual Port Group. Don't forget about the VMware limitation of 10 NICs per VM.
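As a sketch, the same vSwitch / port-group layout can also be created from the ESXi 5.x shell with esxcli instead of the vSphere Client. The vSwitch name, port-group name and vmnic numbers below are assumptions, not from this setup; adjust them to your host:

```shell
# Hypothetical sketch of Step 2 via esxcli on the ESXi 5.x shell.
# Names and vmnic numbers are examples only.
esxcli network vswitch standard add --vswitch-name=vSwitchWAN
esxcli network vswitch standard uplink add --vswitch-name=vSwitchWAN --uplink-name=vmnic4
esxcli network vswitch standard uplink add --vswitch-name=vSwitchWAN --uplink-name=vmnic5
esxcli network vswitch standard portgroup add --vswitch-name=vSwitchWAN --portgroup-name=WAN
# VLAN ID 4095 trunks all VLANs through to the guest:
esxcli network vswitch standard portgroup set --portgroup-name=WAN --vlan-id=4095
```

Promiscuous mode on the port group is still set in the vSphere Client as described above.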
Step 3 (VM Creation)
Create a VM (FreeBSD 64-bit) with the following configuration:
- 2 sockets x 4 cores
- 6GB or more RAM
- 30GB or more HDD
- one E1000 network card connected to a VPG with internet access (for installing VMware guest tools)
Step 4 (PFSense setup)
- Start the VM, connect the pfSense ISO to it and install pfSense as usual.
- Configure internet access (WAN-only configuration).
- Enable SSH access (this is not strictly necessary for VMware Tools, but it lets you easily cut-and-paste the following shell commands into a PuTTY terminal window).
- Install VMware Tools using http://www.ataru.co/computing/linux/pfsense-2-1-install-vmware-tools/
- Shut down pfSense.
Now you can add VMXNET3 interfaces and you will have 10Gb connections. In your configuration backup file, change the NIC names and remove the lagg interfaces.
This setup does not support LAGGs, but you don't need them if you only use them for failover; failover between interfaces will be done by ESXi.
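The "change the NIC names in the backup file" step can be sketched like this. The interface names are assumptions: the old E1000 NICs are taken to appear as em0/em1 and the new VMXNET3 ones as vmx3f0/vmx3f1 (check the real names with ifconfig first):

```shell
# Demo input standing in for a real pfSense config backup (hypothetical names):
printf '<if>em0</if>\n<if>em1</if>\n' > config-backup.xml

# Rewrite the old E1000 interface names to the VMXNET3 ones:
sed -e 's|<if>em0</if>|<if>vmx3f0</if>|' \
    -e 's|<if>em1</if>|<if>vmx3f1</if>|' \
    config-backup.xml > config-fixed.xml
# Also delete the lagg sections by hand before restoring the backup.
```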
P.S. If you find any mistakes, write me and I'll update this guide.
-
Hi wladikz, thanks for the detailed instruction, can you please update the link?
Thank you,
– Ivan
-
Hi Renato,
The modules do not work, but your hint did the trick!
I built versions >= 2.5.13 with the IXGBE_LEGACY_TX option defined to make it work with ALTQ.
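A minimal sketch of such a build on FreeBSD, assuming the kernel sources are installed under the usual /usr/src (paths are assumptions; the exact source tree and patches are described in the posts below):

```shell
# Hypothetical sketch: rebuild the ixgbe module with IXGBE_LEGACY_TX defined.
# Assumes FreeBSD kernel sources under /usr/src.
cd /usr/src/sys/modules/ixgbe
echo 'CFLAGS+= -DIXGBE_LEGACY_TX' >> Makefile   # define the option for the build
make clean all
cp if_ixgbe.ko /boot/kernel/                    # install the compiled module
```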
Thanks a lot!
– Ivan
-
We have a Dell R620 with an Intel X520 on board.
We got 10Gbit link configuration with VLANs, iperf reports 9.4 Gbit/s.
The standard pfSense 2.1, supplied with ixgbe-2.5.15, did not work (ECC error).
The latest driver (November 5) from the FreeBSD 10 source https://github.com/freebsd/freebsd/tree/1440b0c5298e57d592534c87f2ccff9841c4db42/sys/dev/ixgbe addresses the VLAN issue, but requires some patching to get it compiled under FreeBSD 8.3.
Also, IXGBE_LEGACY_TX should be defined - thanks to Renato!
The diff patch and Makefile are attached - sorry for the clumsy, non-professional style.
The compiled driver if_ixgbe.ko should be placed under /boot/kernel, and /boot/loader.conf.local in my case is:
kern.ipc.nmbclusters="524288"
kern.ipc.nmbjumbop="524288"
if_ixgbe_load="YES"
The interfaces for VLANs were enabled with the script /etc/rc.custom_boot_early:
/sbin/ifconfig ix0.172 create
/sbin/ifconfig ix0.5 create
#…
/sbin/ifconfig ix0 up
-
Hi,
I have used my setup (based on VMware ESXi) for a long time, and it is rock stable. Currently I have a cluster of two firewalls and I don't have any problems with it. I checked different options, and I think the problem is not in the Intel driver: I have FreeBSD installed on an R620 with the latest drivers and it works without errors. The pfSense developers added some kernel patch that crashes the Intel driver, but they don't even try to resolve the problem. :'(
-
Hey 1vg, could you attach the .ko you compiled for us?
Thanks! :)
-
What is status now? :)
-
My setup is working stably. Currently there is no solution from the developers (at least none that I know of).
-
I opened a case with pfsense paid support, the developers are working on the problem.
-
Why not use virtual machines?
It works very well!
I need at least 5 Gbps throughput, and I was not able to get that in an ESXi VM. I have not tried Xen or KVM, though…
-
Why not team them in VmWare??
-
I think I do not understand. I have a physical machine with one 10 Gbit card that should host my firewall (physical or virtual, preferably pfsense) and handle heavy traffic over several VLANs via this card. How can teaming help me in this case?
-
Most people run the dual cards to load-balance the traffic :)
I thought you had the same.
Do you run real-life traffic at 5Gbit on the machine? Or did you do an iperf test?
-
Yes, we have an X520 dual-port card, but no load balancing: one port faces upstream, and multiple VLANs on the second port serve the local networks.
Yes, we have real 5Gbit traffic, which is currently bypassing the firewall.