Netgate Discussion Forum

    Intel X520-DA2, kernel: CRITICAL: ECC ERROR!! Please Reboot!!

    Hardware · 60 Posts · 13 Posters · 29.4k Views
    wladikz:

      Yes, it's only possible with PCI pass-through. I'm going to try CentOS + KVM + Open vSwitch as the hypervisor.

      wladikz:

        Hi all,

        For now, the only stable solution I have found is ESXi; none of the other options I checked provided stable 10Gb connectivity. So I'll write a guide to recreate the setup I have, which you can also adapt to your own variant.

        So, first, my hardware configuration:
        two Dell R620 servers with:
          - quad-core 3.3 GHz CPU
          - 32GB RAM
          - X540 DP + I350 DP on board and X520-DA2 network adapters
          - two 146GB SAS HDDs in RAID1

        I used the VMware ESXi 5.1 Dell-customized ISO (it includes all required drivers)

        Step 1 (ESXi install):
        Install ESXi and configure one network interface for management access (I used one 1Gb port).
        Connect to ESXi using the vSphere client and let's configure networking.

        Step 2 (Networking configuration):
        Create three additional virtual switches (WAN, LAN, Interconnect). I used the following interfaces:

        • 2 ports of the X520 for WAN
        • 2 ports of the X540 for LAN
        • 1 port of the I350 for interconnect

        If you need VLANs in the firewall, configure promiscuous mode on the Virtual Port Group (VPG), not on the switch.
        For VLAN trunking, use VLAN 4095 on the Virtual Port Group. Don't forget about the VMware limitation of 10 NICs per VM.
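
        From the ESXi shell, Step 2 can be sketched roughly like this (a sketch only: the vSwitch/portgroup names and the vmnic number are assumptions for this example, and you can do the same thing from the vSphere client instead):

        ```shell
        # Sketch for ESXi 5.x; names and vmnic numbers are assumptions.
        esxcli network vswitch standard add --vswitch-name=WAN
        esxcli network vswitch standard uplink add --vswitch-name=WAN --uplink-name=vmnic2
        esxcli network vswitch standard portgroup add --vswitch-name=WAN --portgroup-name=WAN-trunk
        # VLAN 4095 passes all VLAN tags through to the guest (trunk mode)
        esxcli network vswitch standard portgroup set --portgroup-name=WAN-trunk --vlan-id=4095
        ```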

        Step 3 (VM creation):
        Create a VM (FreeBSD 64-bit) with the following configuration:

        • 2 sockets x 4 cores
        • 6GB or more RAM
        • 30GB or more HDD
        • one E1000 network card connected to a VPG with internet access (for the VMware guest tools install)
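
        For reference, a minimal sketch of the matching .vmx settings (the keys are standard VMware options; the values just mirror the list above, and the E1000 NIC is the temporary one used for installing VMware Tools):

        ```
        numvcpus = "8"
        cpuid.coresPerSocket = "4"
        memsize = "6144"
        guestOS = "freebsd-64"
        ethernet0.virtualDev = "e1000"
        ```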

        Step 4 (pfSense setup):

        • Start the VM, attach the pfSense ISO to it, and install pfSense as usual.
        • Configure internet access (WAN-only configuration).
        • Enable SSH access (not strictly necessary for VMware Tools, but it lets you easily paste the following shell commands into a PuTTY window).
        • Install VMware Tools using http://www.ataru.co/computing/linux/pfsense-2-1-install-vmware-tools/
        • Shut down pfSense.

        Now you can add VMXNET3 interfaces and you will have 10Gb connections. In your configuration backup file, change the NIC names and remove the lagg interfaces.

        This setup does not support LAGGs, but you don't need them for failover; failover between interfaces is handled by ESXi.
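
        As a toy illustration of the backup edit (the interface names are assumptions: E1000 NICs show up as em0, and the old VMware Tools vmxnet driver exposes VMXNET3 ports as vmx3f0, vmx3f1, … on FreeBSD 8.x):

        ```shell
        # Toy illustration only; a real pfSense backup is a full config.xml.
        # The em0/lagg0 -> vmx3f0/vmx3f1 names are assumptions for this sketch.
        printf '<if>em0</if>\n<if>lagg0</if>\n' > config-backup.xml
        sed -e 's|<if>em0</if>|<if>vmx3f0</if>|' \
            -e 's|<if>lagg0</if>|<if>vmx3f1</if>|' \
            config-backup.xml > config-backup-vmx.xml
        cat config-backup-vmx.xml
        ```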

        P.S. If you find any mistakes, write to me and I'll update this guide.

        1vg:

          Hi wladikz, thanks for the detailed instructions. Can you please update the link?
          Thank you,
          – Ivan

          @wladikz:

          • Install vmware tools using http://www.ataru.co/computing/linux/pfsense-2-1-install-vmware-tools/
          1vg:

            Hi Renato,

            The modules do not work, but your hint did the trick!

            @Renato:

            I built versions >= 2.5.13 with the IXGBE_LEGACY_TX option defined to make it work with ALTQ.

            Thanks a lot!
            –
            Ivan

            1vg:

              We have a Dell R620 with an Intel X520 on board.
              We got a 10Gbit link configuration with VLANs; iperf reports 9.4 Gbit/s.
              The ixgbe-2.5.15 driver supplied with standard pfSense 2.1 did not work (ECC error).
              The latest driver (November 5) from the FreeBSD 10 source https://github.com/freebsd/freebsd/tree/1440b0c5298e57d592534c87f2ccff9841c4db42/sys/dev/ixgbe addresses the VLAN issue, but requires some patching to compile under FreeBSD 8.3.
              Also, IXGBE_LEGACY_TX should be defined (thanks to Renato!).
              A diff patch and Makefile are attached; sorry for the clumsy, non-professional style.
              The compiled driver if_ixgbe.ko should be placed under /boot/kernel, and /boot/loader.conf.local in my case is:
              kern.ipc.nmbclusters="524288"
              kern.ipc.nmbjumbop="524288"
              if_ixgbe_load="YES"

              The VLAN interfaces were created at boot with the script /etc/rc.custom_boot_early:
              /sbin/ifconfig ix0.172 create
              /sbin/ifconfig ix0.5 create
              #…
              /sbin/ifconfig ix0 up

              ixgbe_2.5.15_pfsense_2013-11-26_diff.txt
              Makefile.txt
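
              A sketch of the build itself, assuming the newer driver sources have been dropped into sys/dev/ixgbe of a FreeBSD 8.3 source tree (the paths are assumptions; the attached Makefile is the authoritative version):

              ```shell
              # Define IXGBE_LEGACY_TX so the driver works with ALTQ (Renato's hint)
              echo 'CFLAGS+= -DIXGBE_LEGACY_TX' >> /usr/src/sys/modules/ixgbe/Makefile
              make -C /usr/src/sys/modules/ixgbe clean all
              # Install where if_ixgbe_load="YES" in loader.conf.local will find it
              cp /usr/src/sys/modules/ixgbe/if_ixgbe.ko /boot/kernel/
              ```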

              wladikz:

                Hi,

                I have been using my setup (based on VMware ESXi) for a long time, and it is rock solid. I currently run a cluster of two firewalls and have no problems with it. I checked different options, and I think the problem is not in the Intel driver: I have FreeBSD installed on an R620 with the latest drivers and it works without errors. The pfSense developers added some kernel patch that crashes the Intel driver, but they haven't even tried to resolve the problem. :'(

                ancientz:

                  Hey 1vg, could you attach the .ko you compiled for us?

                  Thanks! :)

                  lowprofile:

                    What is status now?  :)

                    wladikz:

                      My setup is working stably. Currently there is no fix from the developers (at least none that I know of).

                      1vg:

                        I opened a case with pfsense paid support, the developers are working on the problem.

                        Supermule:

                          Why not use Virtual machines ?

                          It works very well!

                          @lowprofile:

                          What is status now?  :)

                          1vg:

                            @Supermule:

                            Why not use Virtual machines ?

                            It works very well!

                            I need at least 5 Gbps throughput, and I was not able to get that on an ESXi VM. I have not tried Xen or KVM though…

                            Supermule:

                              Why not team them in VMware??

                              @1vg:

                              @Supermule:

                              Why not use Virtual machines ?

                              It works very well!

                              I need at least 5 Gbps throughput, I was not able to get that on ESXi VM. I have not tried xen or kvm though…

                              1vg:

                                @Supermule:

                                Why not team them in VmWare??

                                I think I do not understand. I have a physical machine with one 10 Gbit card that should host my firewall (physical or virtual, preferably pfsense) and handle heavy traffic over several VLANs via this card. How can teaming help me in this case?

                                Supermule:

                                  Most people run the dual-port cards to load-balance the traffic when they buy them :)

                                  I thought you had the same.

                                  Do you run real-life traffic at 5Gbit on the machine, or did you do an iperf test?

                                  1vg:

                                    @Supermule:

                                    Most run the dual cards when buying them to loadbalance the traffic :)

                                    I thought you had the same.

                                    Do you run real life traffic with 5gbit on the machine? Or did you do an Iperf test?

                                    Yes, we have an X520 dual-port card, but no load balancing: one port faces upstream, and multiple VLANs on the second port serve the local networks.
                                    Yes, we have real 5Gbit traffic that currently bypasses the firewall.

                                    lowprofile:

                                      @Supermule:

                                      Why not use Virtual machines ?

                                      It works very well!

                                      @lowprofile:

                                      What is status now?  :)

                                      I would like to run one physical box and a second one as a VM. A redundant setup. :)

                                      wladikz:

                                        I run two VMware ESXi servers with two pfSense firewalls. They carry around 7Gb/s of real traffic, 132 VLANs, and ~160 subnets, without any problems. I think we should use this setup until the developers fix the ixgbe driver problem.

                                        rbgarga (Netgate Developer):

                                          2.1.1-RELEASE snapshots are now available with the new ixgbe driver: https://forum.pfsense.org/index.php/topic,71546.0.html

                                          Renato Botelho

                                          wladikz:

                                            I'm waiting for the snapshot server to come online. I took one of my firewalls out of the cluster to test the changes and will post an update after testing.

                                            Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.