Future Release support for Infiniband
-
The 32-port QDR switches are amazingly cheap, though loud.
Like I stated above, the TopSpin 120 / Cisco SFS7000 is not that bad when you stack it up against a monster quad-core, quad-processor box that's almost constantly encoding videos on the fly…
Yes, a 3-way would be cheaper, but when I just paid just under $120 to get that switch to my door, it's hard to go the other route... Plus, with a decent card in dual-link mode it can do 30 Gb/s, which is more than I will ever need.
Mainly I'm after the extremely low latency for server-to-server data transfer between the NAS, SQL, DNS, and AD boxes.
-
As SunCatalyst suggested, why not grab a FreeBSD 10 ISO and see what works?
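A quick way to see what the stock ISO actually detects (assuming a Mellanox HCA is in the box; the grep pattern below is just illustrative) would be something along these lines from the live console:

    # is the HCA visible on the PCI bus, and did a driver attach?
    pciconf -lv | grep -B2 -i mellanox
    # did the kernel create any ib/ipoib network interfaces?
    ifconfig -a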
In his reply to your earlier post, JimP said that even though the necessary components are in place in FreeBSD 10, they will likely not make it into pfSense 2.2 by default because they are not part of the default kernel. You may have to use a custom build or custom kernel.
Steve
-
As SunCatalyst suggested, why not grab a FreeBSD 10 ISO and see what works?
In his reply to your earlier post, JimP said that even though the necessary components are in place in FreeBSD 10, they will likely not make it into pfSense 2.2 by default because they are not part of the default kernel. You may have to use a custom build or custom kernel.
Steve
Quick road map from OpenFabrics:
https://www.openfabrics.org/ofed-for-linux-ofed-for-windows/ofed-overview.html
As far as FreeBSD 10…
I downloaded it last night and I will play with it as I have time.
As far as making a custom build or kernel, I'm SOL; the last time I tried, it didn't work out so great, LOL, the machine crashed within 3 seconds of boot LOL
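For whoever tries it, the custom kernel side would presumably be a config that pulls in the OFED stack plus the driver for whichever HCA is in the box. Just a sketch; the option and device names should be double-checked against the FreeBSD 10 sources:

    include  GENERIC
    ident    OFED-IB

    options  OFED        # core OFED / InfiniBand stack
    options  IPOIB_CM    # connected mode for IPoIB
    device   ipoib       # IP-over-InfiniBand network interfaces
    device   mlx4ib      # Mellanox ConnectX HCAs (InfiniBand mode)
    device   mthca       # Mellanox InfiniHost HCAs (the older cards)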
-
I would like to see if I can get my hands on v2.2 and see what I can make of it…
I mean, one more person to report back and track bugs on Redmine would help, after all, wouldn't it?
-
This post is deleted!
-
Really :o
https://redmine.pfsense.org/projects/pfsense/issues?query_id=30
https://redmine.pfsense.org/projects/pfsense/issues?query_id=31
The builder has produced an image, and it's seen some very limited internal testing, but we have not yet set things up for public consumption or automated snapshots.
Well, I guess in that case they are testing ghosts…
-
Just found this thread and am saving it. I also am looking for IB support in pfSense. I would like to use pfSense as a bridge, or at least hang a virtual function onto pfSense from ESXi. This would allow a small 2-3 system IB network to be bridged to 10 and 1 GbE networks. One other concern I have, IF we get this working, is throughput: how fast can pfSense move data across bridged NICs?
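One caveat worth flagging: FreeBSD's if_bridge only ties together Ethernet-style interfaces, and IPoIB is not Ethernet at layer 2, so a true L2 bridge between an ib interface and a GbE NIC probably isn't realistic; routing between the two segments is the more likely path. A rough sketch on a plain FreeBSD box (the interface names and addresses are made up for illustration):

    # ib0 = IPoIB segment, em0 = Ethernet LAN (hypothetical names)
    ifconfig ib0 inet 10.0.20.1/24 up
    ifconfig em0 inet 10.0.10.1/24 up
    # forward packets between the two subnets
    sysctl net.inet.ip.forwarding=1

In pfSense terms that would just be two assigned interfaces with the firewall routing between them, assuming it ever learns to see ib interfaces at all.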
-
Looks like the boys in the FreeNAS community are facing the same issues as we are…
https://bugs.freenas.org/issues/2014
One way or another…
Two heads are better than one, just like four eyes are still better than two! Seems like the pressure is building for InfiniBand to reach more "prosumer" users who want more bandwidth with less latency.
Also, in the end, the main factor is that these cards are becoming obsolete, being pulled out of service, and flooding the consumer market at ridiculously low prices compared to current 10GbE network cards!! The best bet is to "build" a custom working version with the OFED stack and then submit it to the pfSense team and to the NAS teams so they can work their magic...
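If anyone attempts that build, the userland half of OFED (opensm, ibstat, the verbs libraries) sits behind a src.conf knob, so a from-source sketch would presumably look something like this (the kernel config name is the hypothetical one from the sketch above, and none of this has been tried against a pfSense tree, only stock FreeBSD):

    # /etc/src.conf
    WITH_OFED=yes

    # rebuild world plus a custom kernel that includes the OFED options
    cd /usr/src
    make buildworld buildkernel KERNCONF=OFED-IB
    make installkernel KERNCONF=OFED-IB
    # (reboot, then)
    make installworld

Afterwards, ifconfig showing an ib0 interface and ibstat reporting the port state would tell you whether the stack actually came up.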
Off to the BSD forums to do some research and see if there are some peeps playing around with it...
-
Well, seems like I found an old thread of mine… and some success was had, and on v9.0 at that!
I'm gonna grab my stick, poke at the beehive, and see what I get LOL
-
What cards are you on these days? Are you direct-connecting or going through a switch? Are you bridging your LAN with InfiniBand? If so, how?
-
What cards are you on these days? Are you direct-connecting or going through a switch? Are you bridging your LAN with InfiniBand? If so, how?
5x Mellanox InfiniHost III Ex (MHGA28-XTC, http://cecar.fcen.uba.ar/pdf/MHEA28.pdf) tied with dual links to a TopSpin 120 (http://pic.dhe.ibm.com/infocenter/powersys/v3r1m5/topic/iphau_p5/tpspn120hg.pdf)
Keep in mind these are DDR cards running on an SDR switch… However, I do remember seeing somewhere that it can do 8-port DDR switching.
I'm not planning on running IPoIB but RDMA instead.
I have a Myricom 10G network card on its way, and that's going to be tied into my LB4 as a trunk for my home network.
-
The 32-port QDR switches are amazingly cheap, though loud.
Like I stated above, the TopSpin 120 / Cisco SFS7000 is not that bad when you stack it up against a monster quad-core, quad-processor box that's almost constantly encoding videos on the fly…
Yes, a 3-way would be cheaper, but when I just paid just under $120 to get that switch to my door, it's hard to go the other route... Plus, with a decent card in dual-link mode it can do 30 Gb/s, which is more than I will ever need.
Mainly I'm after the extremely low latency for server-to-server data transfer between the NAS, SQL, DNS, and AD boxes.
Take a look at Noctua and similar quality air coolers; they keep my socket 1366 and 2011 boxes as quiet as any normal desktop no matter what they're doing. By that standard, I know those switches ain't quiet :)
GPUs under heavy load are my loudest item by far, but I game with headphones so I don't care.
I tried some AIO water kits, but frankly their pumps have a different, more annoying sound profile, and I was underwhelmed by the performance for the price. The bundled fans tend to suck too. The real deal-killer is that their failure mode makes me nervous: risk of a water leak, and cooling capacity drops to near zero almost instantly. A failed fan on a giant slab of metal just means the CPU gets throttled when not idle; some people even use big air for fanless cooling on the lower-TDP CPUs.
I got 2 FDR (56 Gb/s IB / 40 GbE) + 1 EN (40 GbE only) cards, all dual-port and PCIe 3.0, along with several 100' fiber links for <$900. Still cheaper than the copper X540-T2 10GbE buildout I was considering, yet faster :) Switches are currently cheaper too if I decide to go that route. They can all use SFP+ 10GbE modules with a QSFP-to-SFP+ adapter as well. They might(?) even be able to fan out to quad 10GbE links, but I'm not sure how that works.
-
The 32-port QDR switches are amazingly cheap, though loud.
I got 2 FDR (56 Gb/s IB / 40 GbE) + 1 EN (40 GbE only) cards, all dual-port and PCIe 3.0, along with several 100' fiber links for <$900. Still cheaper than the copper X540-T2 10GbE buildout I was considering, yet faster :) Switches are currently cheaper too if I decide to go that route. They can all use SFP+ 10GbE modules with a QSFP-to-SFP+ adapter as well. They might(?) even be able to fan out to quad 10GbE links, but I'm not sure how that works.
Sounds close to my setup. I'm running 3 ConnectX-2 VPI cards. I'm considering a DDR switch, but the noise and power consumption are unwelcome. In addition, I still need to figure out how to bridge my Ethernet in some logical way, which those switches will not help with. I would like to see pfSense utilized for that bridge, but it's just not there, nor does it seem to be on the minds of the programmers. I'm still learning InfiniBand, so I'm not sure how much help I will be other than to test configurations.
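One more thing to keep in mind with a few cards and no managed switch: an IB fabric needs a subnet manager running somewhere before the ports will go ACTIVE, so one host has to run opensm (part of the OFED userland mentioned above). A minimal sketch, assuming the stack is built as discussed earlier in the thread:

    # on exactly one host on the fabric
    opensm -B       # run the subnet manager as a background daemon
    # on any host, the port state should then go from INIT to ACTIVE
    ibstat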