Netgate Discussion Forum
    QLogic Fiber Channel Card

Off-Topic & Non-Support Discussion
33_viper_33

      All,

This may be a bone-headed question. I picked up a used QLogic Fibre Channel card thinking I could use it to connect to my Dell 5324 switch. The plan was to use it as the main link between my Windows 2011 server and the switch. My server hosts a VM of pfSense (please don't tell me how bad this is, I need low power). After installing the card I checked to make sure the driver was properly installed and was surprised by what I found. The adapter showed up as a storage controller. Now I know a whopping nothing about SAN, but I'm assuming that this is what the adapter is for. The adapter showed up under Device Manager as a QL2300. Is there any way to make my hopes and dreams come true and have a fiber uplink to my switch, or do I need to start over with a different adapter?

      -V

stephenw10 (Netgate Administrator)

I investigated this some time ago, but I can't recall the details. You need to use IP over Fibre Channel (IPoFC) to do this, and the FreeBSD driver doesn't support it, so I stopped investigating at that point. However, as you are running under Windows, you may be able to do something. QLogic seem to have a Windows driver along with a guide:
        http://filedownloads.qlogic.com/files/driver/71212/README_QLA2xNDIS_Windows.pdf

I don't know whether you would need a switch that supports IPoFC, but since FC is its own protocol rather than Ethernet, I suspect you would. :-\

        Steve

33_viper_33

Thanks for the reply, Steve!

Unfortunately, I've stopped working on this for the moment due to problems on the switch side. I can't find SFP transceivers compatible with my Dell PowerConnect 5224. The biggest problem is that I'm not sure which transceivers to use. Dell no longer supports the switch, nor did they know the original manufacturer's part number. I have two different types of Finisar SFP modules and neither is compatible. One is an FTLF8519P2BNL-(N1); the other is an FTRJ8519P1BNL.

I'm not even sure it's worth working on this any further. I picked up an Intel dual-port gigabit NIC and have link aggregation working. It's now a 2GbE connection, which is faster than the optical link would be. I'm also watching the 10GbE NICs coming down in price. I may make a 10GbE switch out of an old computer running pfSense for the servers and primary computers and use the switch for the rest of the infrastructure. I think I could call it quits with 10GbE ;D I'm fairly certain my servers couldn't max that out while running backups. I'm just worried that any computer I use as a 10GbE switch would be speed-crippled by the PCIe bus. Switches are designed to handle that traffic; a computer isn't. However, this is a topic for a different thread.
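For reference, if the aggregation were done in pfSense/FreeBSD itself rather than in the Windows/Intel drivers, a minimal LACP lagg sketch from a shell might look like the following. The interface names em0/em1 and the address are placeholders, not details from this thread, and the two switch ports would also need to be configured as an LACP group on the switch side:

    # load the lagg module if it isn't already present
    kldload if_lagg
    # create the aggregate and add both gigabit ports (placeholder names em0/em1)
    ifconfig lagg0 create
    ifconfig lagg0 laggproto lacp laggport em0 laggport em1
    # assign a placeholder address and bring the aggregate up
    ifconfig lagg0 inet 192.168.1.2/24 up

On pfSense the same thing is normally set up through the web GUI's LAGG interface settings rather than from the shell.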

          -V

stephenw10 (Netgate Administrator)

Sounds like a good move. I don't think you would ever have gotten that working as you wanted originally. You would need a switch that can talk FC but also extract the IP from IPoFC, and does such a switch even exist?
It looks as though the IPoFC protocol was put in place to allow high bandwidth between servers that are already on an FC network, not as a bridge between FC and Ethernet.

If you were using pfSense to bridge 10GbE interfaces, the limiting factor would be the software, not the PCIe bus. PCIe bandwidth varies with the number of lanes and the generation, but an x8 card in a v2 slot could, theoretically, support 32Gbps. However, the limit in pfSense is, I believe, currently around 4Gbps, due to a single giant-locked process coupled with the maximum single-core speed of current CPUs.
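As a rough check on that 32Gbps figure, assuming PCIe 2.0 at 5 GT/s per lane with 8b/10b encoding:

    5 GT/s per lane x 8/10 (8b/10b encoding overhead) = 4 Gbps usable per lane
    4 Gbps per lane x 8 lanes = 32 Gbps per direction for an x8 slot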
That is my limited understanding, but I could easily be wrong!  ::)

            Steve
