Future Release support for InfiniBand
-
For a home environment? Seriously?
Yup…
I like to keep the 'backbone' as clean and as fast as possible, plus the cards are cheap enough to drop in and use at a later date.
Check the build thread and you can see my setup.
Adrian likes to move data fast! ;)
SNIP
Didn't JimP pretty much answer this question for you?
Steve
Thanks, I didn't even remember I posted that, but then again I rarely follow my own posts LOL
-
This post is deleted! -
@SunCatalyst:
Adrian,
I'm with you, I have quite a bit of this equipment kicking around myself here at home; unfortunately it's property of where I work or I would donate some of it.
I got most of this stuff from eBay and slowly upgraded as I went along…
The expensive part isn't the hardware, but the shipping itself.
The MDS600 on eBay is $180, which is a bargain as it can hold up to 70 LFF SAS or SATA drives, and since it's a JBOD chassis you can hook it up to a number of SAS controllers and be happy.
As of right now, with the 4TB drives on the market, that would total 280TB!!!
Not that you would ever need it, but it's easier to scale down rather than running out of room, trying to expand, and realizing the only way is to pull the smaller drives and replace them with larger ones. My main goal is to run multiple smaller fast drives rather than a few large and slower ones. The main reason I'm asking about InfiniBand is that I want to see the throughput difference between the standard 7200 RPM "Performance" drives vs. the hybrid drives vs. MDL SAS 10K. Also, there is no support for InfiniBand on FreeNAS either; however, I do have a dual boot on the NAS so I can run either one, which is handy :)
All in all... I have under $2000 in it, and that's including the rack, servers, switches, KVM, battery backups and shipping!! -
FreeBSD had OFED stack support in v4.4; however, it's hard to guess which build the next version will be based on, and maybe someone from the dev team can chime in. I've posted bounties and even attempted to do it myself, but I have failed and locked up pfSense more times than I wished, as well as re-installed it numerous times LOL
After a while I gave up and sat back hoping that someone else would chime in and/or lend a hand with this… eventually...
Any input would be greatly appreciated!
If 10.x has basic drivers for mellanox cards, and has enough IPoIB written to get an ip address, shouldn't that be all you need?
No RDMA or other complicated support needed, especially if other systems or a switch take care of the subnet manager duties. It's not like you need to route 40Gb of traffic at home to consolidate your network; I'm sure you can get away with a switch or even a peer-to-peer topology with only 2~4 hosts. (The 32-port QDR switches are amazingly cheap, though loud.)
FWIW I run 40GbE over 56Gb IB FDR at home in a 3-way P2P with dual-port cards, for fun and less noise. That network is basically just for fast SAN-style traffic to a beefy ZFS box because it was so cheap to do (fleabay), and the internet-capable network uses regular GbE NICs. I'm sure by the time home internet breaks a gig for more than a town or two, FreeBSD will have much better native IB support.
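For anyone who wants to poke at this, once a FreeBSD build actually includes the OFED bits, the IPoIB side really is just "get an IP address" as described above. A minimal sketch, assuming a kernel and modules built with OFED support (module names from the FreeBSD OFED tree; the interface name ib0 and the 10.10.10.0/24 subnet are made up for illustration):

```shell
# /boot/loader.conf -- load the Mellanox ConnectX HCA driver and IPoIB
# at boot (assumes these were built as modules; verify for your release)
mlx4ib_load="YES"
ipoib_load="YES"

# /etc/rc.conf -- then the IPoIB interface configures like any other NIC
# (ib0 and 10.10.10.0/24 are illustrative; 2044 is the usual
# datagram-mode IPoIB MTU)
ifconfig_ib0="inet 10.10.10.1 netmask 255.255.255.0 mtu 2044"
```

Connected mode, if enabled, allows a much larger MTU than the 2044-byte datagram default.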
-
This post is deleted! -
the 32 port QDR switches are amazingly cheap though loud
Like I stated above, the TopSpin 120 / Cisco SFS7000 is not that bad when you stack it up against a monster quad-core, quad-processor box that's almost constantly encoding videos on the fly…
Yes, a 3-way would be cheaper, but when I paid just under $120 for that switch to my door, it's hard to go the other route... Plus, with a decent card in dual-link mode it can do 30GB/s, which is more than I will ever need.
Mainly I'm after the extremely low latency for server-to-server data transfer between the NAS, SQL, DNS and AD. -
As SunCatalyst suggested, why not grab a FreeBSD 10 ISO and see what works?
In his reply to your earlier post JimP said that even though the components necessary were in place in FreeBSD 10, they will likely not make it by default into pfSense 2.2 because they are not part of the default kernel. You may have to use a custom build or custom kernel.
Steve
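For the record, the custom-kernel route would look roughly like the sketch below. These option and device names are from memory of the FreeBSD 9/10-era OFED build notes, so treat them as a starting point and check sys/conf/NOTES in your source tree before building:

```shell
# Additions to a kernel config derived from GENERIC
# (names per the FreeBSD 9/10-era OFED notes; verify in sys/conf/NOTES)
options  OFED       # core OpenFabrics stack
options  IPOIB_CM   # IPoIB connected mode
device   ipoib      # IP-over-InfiniBand interface
device   mlx4ib     # Mellanox ConnectX InfiniBand HCA
```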
-
As SunCatalyst suggested, why not grab a FreeBSD 10 ISO and see what works?
In his reply to your earlier post JimP said that even though the components necessary were in place in FreeBSD 10, they will likely not make it by default into pfSense 2.2 because they are not part of the default kernel. You may have to use a custom build or custom kernel.
Steve
Quick road map from Open Fabrics
https://www.openfabrics.org/ofed-for-linux-ofed-for-windows/ofed-overview.html
As far as FreeBSD 10…
I downloaded it last night and I will play with it as I have time.
As far as making a custom build or kernel, I'm SOL; the last time I tried it didn't work out so great, the machine crashed within 3 seconds of boot LOL
-
I would like to see if I can get my hands on v2.2 and see what I can make of it…
I mean, one more person to report back and track bugs on redmine would help after all, wouldn't it? -
This post is deleted! -
Really :o
https://redmine.pfsense.org/projects/pfsense/issues?query_id=30
https://redmine.pfsense.org/projects/pfsense/issues?query_id=31
The builder has produced an image, and it's seen some very limited internal testing, but we have not yet set things up for public consumption or automated snapshots.
Well, I guess in that case they are testing ghosts…
-
Just found this thread and am saving it. I also am looking for IB support in pfSense. I would like to use pfSense as a bridge, or at least hang a virtual function onto pfSense from ESXi. This would allow a small 2-3 system IB network bridged to 10 and 1 GbE networks. One other concern I have, if we get this working, is throughput: how fast can pfSense move data across bridged NICs?
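One caveat on the bridging idea, worth knowing before testing: FreeBSD's if_bridge(4) only joins Ethernet-framed interfaces, and IPoIB is not Ethernet, so a plain L2 bridge containing an ib interface generally isn't possible; routing between the two subnets is the usual fallback. A sketch of that routed layout (interface names and subnets are invented for illustration):

```shell
# /etc/rc.conf -- route (rather than L2-bridge) between an IPoIB subnet
# and the Ethernet LAN; em0/ib0 and the subnets are illustrative
gateway_enable="YES"                                    # forward IPv4 between interfaces
ifconfig_em0="inet 192.168.1.1 netmask 255.255.255.0"   # Ethernet LAN side
ifconfig_ib0="inet 10.10.10.1 netmask 255.255.255.0"    # InfiniBand side
```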
-
Looks like the boys in the FreeNAS community are facing the same issues as we are…
https://bugs.freenas.org/issues/2014
One way or another…
2 heads are better than 1, just like 4 eyes are still better than 2! Seems like there's pressure for InfiniBand to come to more "prosumer" users that want more bandwidth with less latency.
Also, in the end the main factor is that these cards are becoming obsolete and being pulled out of service, flooding the consumer market at ridiculously low prices compared to current 10Gb network cards!!
The best guess is to "build" a custom working version with the OFED stack and then submit it to the pfSense team and the NAS teams so they can work their magic...
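For what the "build a custom working version" step would involve, here's a rough sketch from memory of the FreeBSD source-build procedure; WITH_OFED pulls the OpenFabrics userland (libibverbs, opensm, the diagnostic tools) into buildworld. Verify against the FreeBSD Handbook for the release you're on, and note MYKERNEL is a placeholder for your own kernel config:

```shell
# /etc/src.conf -- have buildworld produce the OFED userland too
WITH_OFED="YES"

# Then, roughly, the usual source-upgrade dance with a custom kernel
# config that includes the IB devices:
#   make buildworld
#   make buildkernel KERNCONF=MYKERNEL
#   make installkernel KERNCONF=MYKERNEL
#   make installworld
```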
Off to the BSD forums to do some research and see if there are some peeps playing around with it... -
Well, seems like I found an old thread of mine… and some success is found, and on v9.0 at that!
I'm gonna grab my stick and poke at the beehive and see what I get LOL -
What cards are you on these days? Are you direct connecting or going through a switch? Are you bridging your LAN with InfiniBand? If so, how?
-
What cards are you on these days? Are you direct connecting or going through a switch? Are you bridging your LAN with InfiniBand? If so, how?
5x Mellanox InfiniHost III Ex (MHGA28-XTC, http://cecar.fcen.uba.ar/pdf/MHEA28.pdf), tied with dual links to a TopSpin 120 (http://pic.dhe.ibm.com/infocenter/powersys/v3r1m5/topic/iphau_p5/tpspn120hg.pdf)
Keep in mind these are DDR cards running on an SDR switch… However, I do remember seeing somewhere that it can do 8-port DDR switching.
I'm not planning on running IPoIB, but RDMA instead.
I have a Myricom 10G network card on its way, and that's going to be tied into my LB4 as a trunk for my home network.
-
the 32 port QDR switches are amazingly cheap though loud
Like I stated above, the TopSpin 120 / Cisco SFS7000 is not that bad when you stack it up against a monster quad-core, quad-processor box that's almost constantly encoding videos on the fly…
Yes, a 3-way would be cheaper, but when I paid just under $120 for that switch to my door, it's hard to go the other route... Plus, with a decent card in dual-link mode it can do 30GB/s, which is more than I will ever need.
Mainly I'm after the extremely low latency for server-to-server data transfer between the NAS, SQL, DNS and AD.
Take a look at Noctua and similar quality air coolers; they keep my socket 1366 and 2011 boxes as quiet as any normal desktop no matter what they're doing. By that standard I know those switches ain't quiet :)
GPUs under heavy load are my loudest item by far, but I game with headphones so I don't care.
I tried some AIO water kits, but frankly their pumps have a different, more annoying sound profile, and I was underwhelmed by the performance for the price. The bundled fans tend to suck too. The real deal-killer is that the failure mode on them makes me nervous: risk of a water leak, and cooling capacity goes to near zero almost instantly. A failed fan on a giant slab of metal just means the CPU is throttled when not idle; some people even use big air for fanless cooling on lower-TDP CPUs.
I got 2 FDR (56Gb IB / 40GbE) + 1 EN (40GbE only) cards, which are all dual-port and PCIe 3.0, along with several 100' fiber links for <$900. Still cheaper than the copper X540-T2 10GbE buildout I was considering, yet faster :) Switches are currently cheaper too if I decide to go that route. They can all use SFP+ 10GbE modules with a QSFP plug adapter as well. They might(?) even be able to fan out to quad 10GbE links, but I'm not sure how that works.
-
the 32 port QDR switches are amazingly cheap though loud
I got 2 FDR (56Gb IB / 40GbE) + 1 EN (40GbE only) cards, which are all dual-port and PCIe 3.0, along with several 100' fiber links for <$900. Still cheaper than the copper X540-T2 10GbE buildout I was considering, yet faster :) Switches are currently cheaper too if I decide to go that route. They can all use SFP+ 10GbE modules with a QSFP plug adapter as well. They might(?) even be able to fan out to quad 10GbE links, but I'm not sure how that works.
Sounds close to my setup. I'm running 3 ConnectX-2 VPI cards. I'm considering a DDR switch, but the noise and power consumption is unwelcome. In addition, I still need to figure out how to bridge my Ethernet in some logical way, which those switches will not aid in. I would like to see pfSense utilized for that bridge, but it's just not there, nor does it seem to be on the minds of the programmers. I'm still learning InfiniBand, so I'm not sure how much help I will be, other than to test configurations.