Netgate Discussion Forum

    Future Release support for Infiniband

    Hardware
    • _Adrian_

      Hey guys…
      I know there's probably a road map being worked on for the next step of pfSense's "evolution", so to speak...

      Any plans on porting it? RDMA would be a bit of a pain in the ass, since all the machines on the "network" would have to run the same operating system. IPoIB (IP over InfiniBand), however, would be a very easy networking solution: InfiniBand cards are cheaper than 10GbE cards, and 10GbE switches are still fairly expensive for a home environment, hard to justify against a TopSpin 120 / Cisco SFS7000 at a fraction of the cost.

      FreeBSD had OFED stack support in v4.4, but it's hard to guess which build the next release will be based on; maybe someone from the dev team can chime in. I've posted bounties and even attempted it myself, but I failed, locked up pfSense more times than I'd wish, and re-installed it numerous times LOL.
      After a while I gave up and sat back, hoping that someone else would chime in and/or lend a hand with this... eventually...

      Any input would be greatly appreciated!

      If it ain't broken, fix it till it is :P

      • razzfazz

        For a home environment? Seriously?

        • stephenw10 (Netgate Administrator)

          Adrian likes to move data fast!  ;)

          I forget where we got to with this last time. It seems there is at least some support from FreeBSD 9.0 onwards:
          https://wiki.freebsd.org/InfiniBand
          pfSense 2.2 will be built on FreeBSD 10, so it should have some support. The actual card drivers seem to be the sticking point, though.  :-\

          Steve

          Edit: It's slightly confusing because it's not listed with the other network drivers, but it looks like most of the stuff is there:
          http://svnweb.freebsd.org/base/stable/10/sys/ofed/drivers/

          Edit: Didn't JimP pretty much answer this question for you here?:
          http://forum.pfsense.org/index.php/topic,69856.msg387646.html#msg387646
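
          Edit: For reference, the wiki page above lists the kernel bits involved. Something along these lines (copied from that page; untested on pfSense) is what a custom kernel would need:

            options  OFED      # InfiniBand protocol stack and support
            options  SDP       # Sockets Direct Protocol for InfiniBand
            options  IPOIB_CM  # connected mode for IPoIB
            device   ipoib     # IP-over-InfiniBand network interfaces
            device   mthca     # InfiniHost HCAs
            device   mlx4ib    # ConnectX InfiniBand support
            device   mlxen     # ConnectX Ethernet support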

          • _Adrian_

            @razzfazz:

            For a home environment? Seriously?

            Yup…
            I like to keep the 'backbone' as clean and as fast as possible, plus the cards are cheap enough to drop in and use at a later date.
            Check the build thread and you can see my setup.

            @stephenw10:

            Adrian likes to move data fast!  ;)
            SNIP
            Didn't JimP pretty much answer this question for you

            Steve

            Thanks, I didn't even remember I posted that, but then again I rarely follow my own posts LOL

            If it ain't broken, fix it till it is :P

            • A Former User

              This post is deleted!
              • _Adrian_

                @SunCatalyst:

                Adrian,

                I'm with you, I have quite a bit of this equipment kicking around myself here at home; unfortunately it's the property of where I work, or I would donate some of it.

                I got most of this stuff from eBay and slowly upgraded as I went along…
                The expensive part isn't the hardware but the shipping itself.
                The MDS600 on eBay is $180, which is a bargain: it can hold up to 70 LFF SAS or SATA drives, and since it's a JBOD chassis you can hook it up to any number of SAS controllers and be happy.
                With the 4TB drives on the market right now, that would total 280TB!!!
                Not that you would ever need it, but it's easier to scale down than to run out of room, try to expand, and realize the only way is to pull the smaller drives and replace them with larger ones. My main goal is to run multiple smaller, faster drives rather than a few large, slower ones. The main reason I'm asking about InfiniBand is that I want to see the throughput difference between the standard 7200 rpm "performance" drives vs. the hybrid drives vs. 10K MDL SAS. Also, there is no InfiniBand support in FreeNAS either; however, I do have a dual boot on the NAS, so I can run either one, which is handy :)

                All in all...
                I have under $2000 into it, and that's including the rack, servers, switches, KVM, battery backups and shipping!!

                If it ain't broken, fix it till it is :P

                • Aluminum

                  @_Adrian_:

                  FreeBSD had OFED stack support in v4.4, but it's hard to guess which build the next release will be based on; maybe someone from the dev team can chime in. I've posted bounties and even attempted it myself, but I failed, locked up pfSense more times than I'd wish, and re-installed it numerous times LOL.
                  After a while I gave up and sat back, hoping that someone else would chime in and/or lend a hand with this… eventually...

                  Any input would be greatly appreciated!

                  If 10.x has basic drivers for Mellanox cards and enough IPoIB written to get an IP address, shouldn't that be all you need?
                  No RDMA or other complicated support needed, especially if another system or a switch takes care of the subnet-manager duties.
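
                  In rough terms it should be no more than this (a sketch only; the interface name and addresses are just examples, and opensm is only needed if nothing else on the fabric runs a subnet manager):

                    opensm -B                          # start a subnet manager in the background
                    ifconfig ib0 inet 10.0.50.1/24 up  # the IPoIB interface
                    ping 10.0.50.2                     # another IPoIB host on the fabric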

                  It's not like you need to route 40Gb of traffic at home just to consolidate your network; I'm sure you can get away with a switch, or even a peer-to-peer topology if it's only 2-4 hosts. (The 32-port QDR switches are amazingly cheap, though loud.)

                  FWIW, I run 40GbE over 56Gb FDR IB at home in a 3-way P2P setup with dual-port cards, for fun and less noise. That network is basically just fast SAN-style traffic for a beefy ZFS box, because it was so cheap to do (fleabay); the internet-capable network uses regular GbE NICs. I'm sure that by the time home internet breaks a gigabit for more than a town or two, FreeBSD will have much better native IB support.

                  • A Former User

                    This post is deleted!
                    • _Adrian_

                      @Aluminum:

                      The 32-port QDR switches are amazingly cheap, though loud.

                      Like I stated above, the TopSpin 120 / Cisco SFS7000 is not that bad when you stack it up against a monster quad-core, quad-processor box that's almost constantly encoding videos on the fly…
                      Yes, a 3-way would be cheaper, but when I just paid just under $120 for that switch delivered to my door, it's hard to go the other route...

                      Plus, with a decent card in dual-link mode it can do 30Gb/s, which is more than I will ever need.
                      Mainly I'm after the extremely low latency for server-to-server data transfer between the NAS, SQL, DNS and AD.

                      If it ain't broken, fix it till it is :P

                      • stephenw10 (Netgate Administrator)

                        As SunCatalyst suggested, why not grab a FreeBSD 10 ISO and see what works?
                        In his reply to your earlier post, JimP said that even though the necessary components are in place in FreeBSD 10, they will likely not make it into pfSense 2.2 by default, because they are not part of the default kernel. You may have to use a custom build or a custom kernel.
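
                        Roughly, that would mean something like this on a stock FreeBSD 10 source tree (a sketch only; the kernel config name is made up, and the options/device lines are the ones from the wiki page linked earlier):

                          echo 'WITH_OFED="YES"' >> /etc/src.conf   # build the OFED userland tools too
                          cd /usr/src
                          cp sys/amd64/conf/GENERIC sys/amd64/conf/IBKERNEL
                          # ...add the OFED options/device lines to IBKERNEL...
                          make buildkernel KERNCONF=IBKERNEL
                          make installkernel KERNCONF=IBKERNEL
                          shutdown -r now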

                        Steve

                        • _Adrian_

                          @stephenw10:

                          As SunCatalyst suggested, why not grab a FreeBSD 10 ISO and see what works?
                          In his reply to your earlier post, JimP said that even though the necessary components are in place in FreeBSD 10, they will likely not make it into pfSense 2.2 by default, because they are not part of the default kernel. You may have to use a custom build or a custom kernel.

                          Steve

                          A quick road map from OpenFabrics:
                          https://www.openfabrics.org/ofed-for-linux-ofed-for-windows/ofed-overview.html

                          As far as FreeBSD 10 goes…
                          I downloaded it last night and I will play with it as I have time.

                          As far as making a custom build or kernel, I'm SOL; the last time I tried, it didn't work out so great LOL, the machine crashed within 3 seconds of boot LOL

                          If it ain't broken, fix it till it is :P

                          • _Adrian_

                            I would like to see if I can get my hands on v2.2 and see what I can make of it…
                            I mean, one more person to report back and track bugs on Redmine would help, after all, wouldn't it?

                            If it ain't broken, fix it till it is :P

                            • A Former User

                              This post is deleted!
                              • _Adrian_

                                Really :o
                                https://redmine.pfsense.org/projects/pfsense/issues?query_id=30

                                https://redmine.pfsense.org/projects/pfsense/issues?query_id=31

                                @jimp:

                                The builder has produced an image, and it's seen some very limited internal testing, but we have not yet set things up for public consumption or automated snapshots

                                Well, I guess in that case they are testing ghosts…

                                If it ain't broken, fix it till it is :P

                                • 33_viper_33

                                  Just found this thread and am saving it. I'm also looking for IB support in pfSense. I would like to use pfSense as a bridge, or at least hang a virtual function onto pfSense from ESXi. That would allow a small 2-3 system IB network bridged to 10GbE and 1GbE networks. One other concern I have, IF we get this working, is throughput: how fast can pfSense move data across bridged NICs?
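
                                  For reference, the naive approach would be the usual if_bridge setup (interface names are just examples); the open question is whether bridge(4) will even accept an IPoIB member, since IPoIB frames aren't Ethernet, so routing between the IB and Ethernet segments may be the realistic fallback:

                                    ifconfig bridge0 create
                                    ifconfig bridge0 addm igb0 up   # the Ethernet side
                                    ifconfig bridge0 addm ib0       # likely rejected: ib0 isn't an Ethernet interface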

                                  • _Adrian_

                                    Looks like the boys in the FreeNAS community are facing the same issues as we are…
                                    https://bugs.freenas.org/issues/2014

                                    One way or another…
                                    2 heads are better than 1, just like 4 eyes are still better than 2!

                                    Seems like the pressure is on for InfiniBand to come to more "prosumer" users who want more bandwidth with less latency.
                                    In the end, the main factor is that these cards are becoming obsolete, being pulled out of service, and flooding the consumer market at ridiculously low prices compared to current 10GbE network cards!!

                                    The best bet is to build a custom working version with the OFED stack and then submit it to the pfSense team and the NAS teams so they can work their magic...
                                    Off to the BSD forums to do some research and see if there are some peeps playing around with it...

                                    If it ain't broken, fix it till it is :P

                                    • _Adrian_

                                      Well, seems like I found an old thread of mine… and some success was had, on v9.0 at that!
                                      I'm gonna grab my stick, poke at the beehive and see what I get LOL

                                      https://forums.freebsd.org/viewtopic.php?f=7&t=28792

                                      If it ain't broken, fix it till it is :P

                                      • 33_viper_33

                                        What cards are you on these days? Are you direct-connecting or going through a switch? Are you bridging your LAN with InfiniBand? If so, how?

                                        • _Adrian_

                                          @33_viper_33:

                                          What cards are you on these days? Are you direct-connecting or going through a switch? Are you bridging your LAN with InfiniBand? If so, how?

                                          5x Mellanox InfiniHost III Ex (MHGA28-XTC, http://cecar.fcen.uba.ar/pdf/MHEA28.pdf), tied with dual links to a TopSpin 120 (http://pic.dhe.ibm.com/infocenter/powersys/v3r1m5/topic/iphau_p5/tpspn120hg.pdf).
                                          Keep in mind these are DDR cards running on an SDR switch… However… I do remember seeing somewhere that it can do 8-port DDR switching.
                                          I'm not planning on running IPoIB, but RDMA instead.
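
                                          (For what it's worth, once the OFED userland is built, the stock diagnostics should settle what the links actually negotiated, e.g.:)

                                            ibstat    # per-port state plus negotiated link width and speed (SDR vs. DDR)
                                            ibhosts   # list the HCAs visible on the fabric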

                                          I have a Myricom 10G network card on its way and that's going to be tied into my LB4 as a trunk for my home network.

                                          If it ain't broken, fix it till it is :P

                                          • Aluminum

                                            @_Adrian_:

                                            @Aluminum:

                                            The 32-port QDR switches are amazingly cheap, though loud.

                                            Like I stated above, the TopSpin 120 / Cisco SFS7000 is not that bad when you stack it up against a monster quad-core, quad-processor box that's almost constantly encoding videos on the fly…
                                            Yes, a 3-way would be cheaper, but when I just paid just under $120 for that switch delivered to my door, it's hard to go the other route...

                                            Plus, with a decent card in dual-link mode it can do 30Gb/s, which is more than I will ever need.
                                            Mainly I'm after the extremely low latency for server-to-server data transfer between the NAS, SQL, DNS and AD.

                                            Take a look at Noctua and similar quality air coolers; they keep my socket 1366 and 2011 boxes as quiet as any normal desktop no matter what they're doing. By that standard, I know those switches ain't quiet :)
                                            GPUs under heavy load are my loudest item by far, but I game with headphones, so I don't care.

                                            I tried some AIO water kits, but frankly their pumps have a different, more annoying sound profile, and I was underwhelmed by the performance for the price. The bundled fans tend to suck too. The real deal-killer is that their failure mode makes me nervous: with a water leak, cooling capacity drops to near zero almost instantly. A failed fan on a giant slab of metal just means the CPU gets throttled when not idle; some people even use big air coolers fanless on lower-TDP CPUs.

                                            I got 2 FDR (56Gb IB / 40GbE) cards + 1 EN (40GbE only), all dual-port and PCIe 3.0, along with several 100' fiber links, for under $900. Still cheaper than the copper X540-T2 10GbE buildout I was considering, yet faster :) Switches are currently cheaper too, if I decide to go that route. They can all use SFP+ 10GbE modules with a QSFP plug adapter as well. They might(?) even be able to fan out to quad 10GbE links, but I'm not sure how that works.
