Netgate Discussion Forum

    Mellanox Support in 2.1

    2.1 Snapshot Feedback and Problems - RETIRED
    6 Posts 3 Posters 2.2k Views
    • _Adrian_

      Is there any Mellanox support as of yet?

      If it ain't broken, fix it till it is :P

      • cmb

        We support everything that's supported in FreeBSD 8.3; not sure whether Mellanox is.

        • _Adrian_

          I posted a while ago in the BSD forums and they said they were working on it as of 8.1.

          I haven't checked back since then, but this is what I found:

          http://forums.freebsd.org/showthread.php?t=17774

          If it ain't broken, fix it till it is :P

          • RootWyrm

            @_Adrian_:

            I posted a while ago in the BSD forums and they said they were working on it as of 8.1.

            I haven't checked back since then, but this is what I found:

            http://forums.freebsd.org/showthread.php?t=17774

            It's a bit more complicated than that. It's a two-part puzzle. Part one is the Mellanox InfiniBand HCA driver. Part two is adding IPoIB, since InfiniBand doesn't natively carry IP. Unfortunately, they decided to do it as OFED compatibility with wrappers rather than a proper port (so yes, all the bad Linux bits are still there). That went into 9.0, not 8.3, and it requires OFED support in the kernel. So, yes, a custom kernel. And the documentation is nonexistent.
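
            For reference, and going from memory rather than a tested build, the custom kernel route means adding something along these lines to the kernel config - treat it as a sketch and check sys/conf/NOTES in the 9.0 source tree for the exact option and device names:

                options  OFED        # InfiniBand protocol stack (the OFED compat layer)
                options  IPOIB_CM    # IPoIB connected mode, allows a larger MTU
                device   mlx4ib      # Mellanox ConnectX HCA in InfiniBand mode
                device   mlxen       # Mellanox ConnectX HCA in Ethernet mode
                device   mthca       # older Mellanox InfiniHost HCAs

            Then it's the usual buildkernel/installkernel cycle with KERNCONF pointing at that config.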

            My recommendation, in all honesty, is to use a pair of Intel 10GbE NICs in an LACP configuration with ixgb(4) or ixgbe(4). At this point in time it will be faster anyhow.
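
            For what it's worth, on stock FreeBSD the lagg setup itself is just a couple of ifconfig lines (assuming the two Intel ports show up as ix0 and ix1 - the names will vary with the card):

                ifconfig lagg0 create
                ifconfig lagg0 laggproto lacp laggport ix0 laggport ix1 up

            In pfSense you'd build the same lagg(4) interface from the GUI under Interfaces > (assign) > LAGGs rather than with ifconfig, but the result is the same.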

            • _Adrian_

              Thanks for the input…
              Sadly I have all the hardware in the other servers, along with the switches (IBM Top Spin 120 and Woven Systems TRX-100).

              Just ordered a few more cables; I want to see this set up and rolling.

              If it ain't broken, fix it till it is :P

              • _Adrian_

                Also found out that support is in place in FreeBSD 9.0.

                If it ain't broken, fix it till it is :P
