Netgate Discussion Forum

    HP T5740 gigabit over PCIx

      stephenw10 Netgate Administrator

      Since the expansion module has PCIe capability you might consider just using a PCIe NIC instead. Obviously you've already got the quad port PCI-X card so that would be additional cost. You might as well try the PCI-X NIC and see how it goes. Like you said it should work fine in a PCI slot just with limited bandwidth. That shouldn't be much of an issue either since the Atom in that box won't do much more than 500Mbps anyway.

      Steve

      Edit: Typo

        w1ll1am

        @stephenw10:

        Since the expansion module has PCIe capability you might consider just using a PCIe NIC instead. Obviously you've already got the quad port PCI-X card so that would be additional cost. You might as well try the PCI-X NIC and see how it goes. Like you said it should work fine in a PCI slot just with limited bandwidth. That shouldn't be much of an issue either since the Atom in that box won't do much more than 500Mbps anyway.

        Steve

        The dual-port PCIe NIC biggsy mentioned is only $25ish on eBay, so I will look into that. I got the 4-port that I have now for like $12, so it isn't that big of a deal.

        The two cards that biggsy mentioned, the HP NC360T (2-port) and NC364T (4-port), are not in the FreeBSD hardware list. Should I be concerned about that?

          stephenw10 Netgate Administrator

          There are many cards that are supported by virtue of having a supported chipset but aren't mentioned specifically. Check the forum, someone will have tried it.
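
          One quick way to check, once a card is physically in the box, is to see whether FreeBSD attaches a driver to it. From a shell, the usual check is something like this (just a sketch, not specific to any one card):

           # list PCI devices with vendor/device strings; a supported Intel NIC
           # shows up attached to a driver instance such as em0 or em1
           pciconf -lv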

          Steve

            w1ll1am

            @stephenw10:

            There are many cards that are supported by virtue of having a supported chipset but aren't mentioned specifically. Check the forum, someone will have tried it.

            Steve

            Okay great thanks. Like I said I will post back with details and pictures after the build. Thanks for all the help.

              biggsy

               Ah!  I didn't get that there were both PCIe and PCI-X risers.

              I agree with Steve - given that you have the IBM card, try that first.

              I'm pretty sure the HP NC36xT cards are rebranded Intel cards.  They use an 82571EB chipset in the NC360 and 2 x  82571GB in the NC364.

                biggsy

                For some reason this project got me interested and I did a bit of research today.

                 Your IBM card is 3.3V 64-bit PCI-X. The riser with the white connector is 5V 32-bit PCI.

                So I don't think you'll have much joy there, unless you feel like going down this path.  Of course, you could just end up frying the card.

                Overall the PCIe riser and dual-port NC360T might be easier and quicker.

                Good luck and please let us know the outcome.

                  stephenw10 Netgate Administrator

                   Ah, of course. I always forget about the voltage!  ::) I guess it's so rarely something you have to worry about these days. Filing out the 3.3V notch seems pretty extreme to me. In the worst case you could damage both the card and the motherboard doing that.

                  Steve

                    w1ll1am

                     Well, lol, I got everything over the weekend and I did cut out the notch. That had been my plan the entire time; I never really thought about it causing a problem. Unfortunately I don't have pictures at the moment, but I will try to get some today. It was an extremely tight fit. I was able to get the HDD (Xbox 360, 20 GB), the 4-port Intel NIC, and the Intel wireless NIC all installed, and everything was recognized. Everything has been up and running since Saturday. Not sure what I am going to do with the wireless card; I kind of put it in just because I could. I am only using 2 of the 4 ports on the card at the moment, so I'm not sure how it will act once I use them all.

                      stephenw10 Netgate Administrator

                      Nice. A good result all round then.  :)
                      Post some throughput numbers if you do any testing, always useful.

                      Steve

                        w1ll1am

                        @stephenw10:

                        Nice. A good result all round then.  :)
                        Post some throughput numbers if you do any testing, always useful.

                        Steve

                         I have Time Warner Cable internet; I am paying for 15 Mbps down and 1 Mbps up. I didn't test the upload, but I ran a download test over both IPv4 and IPv6; the results are below. IPv6 is slower.

                        IPv4
                        [2.1.3-RELEASE][admin@pfsense.scanlon]/root(4): fetch -o /dev/null http://cachefly.cachefly.net/100mb.test
                        /dev/null                                    100% of  100 MB 1713 kBps 00m00s

                        IPv6
                        [2.1.3-RELEASE][admin@pfsense.scanlon]/root(5): fetch -o /dev/null http://ipv6.download.thinkbroadband.com/100MB.zip
                        /dev/null                                    100% of  100 MB 1343 kBps 00m00s

                          stephenw10 Netgate Administrator

                          That's really just limited by your WAN speed then. A more interesting test would be between two internal interfaces, both on the PCI-X card.

                          Steve

                            w1ll1am

                            @stephenw10:

                            That's really just limited by your WAN speed then. A more interesting test would be between two internal interfaces, both on the PCI-X card.

                            Steve

                             I was looking at ways to test that. I came across iperf, so I am going to try it out later. I will post my results.

                              w1ll1am

                               iperf results (this is between a client and pfSense). I am doing this remotely, so I only have access to pfSense and the client I am typing on now.

                               pfSense was the server.
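
                               For reference, the server end is just iperf's default listener, started from the pfSense shell (assuming the iperf package/binary is installed on the box):

                                iperf -s        # TCP server, listens on TCP port 5001 by default
                                iperf -s -u     # UDP server, for the -u / -b test further down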

                              TCP test

                               iperf -c 192.168.1.1 -t 20 -w 100k -P 20
                              

                              Output

                              
                              [ ID] Interval       Transfer     Bandwidth
                              [  6]  0.0-20.0 sec  40.8 MBytes  17.1 Mbits/sec
                              [  7]  0.0-20.0 sec  34.6 MBytes  14.5 Mbits/sec
                              [ 12]  0.0-20.0 sec  34.6 MBytes  14.5 Mbits/sec
                              [  5]  0.0-20.0 sec  36.2 MBytes  15.2 Mbits/sec
                              [  4]  0.0-20.0 sec  41.4 MBytes  17.3 Mbits/sec
                              [ 17]  0.0-20.1 sec  36.9 MBytes  15.4 Mbits/sec
                              [ 20]  0.0-20.0 sec  43.6 MBytes  18.3 Mbits/sec
                              [ 16]  0.0-20.1 sec  37.9 MBytes  15.8 Mbits/sec
                              [ 22]  0.0-20.1 sec  35.5 MBytes  14.8 Mbits/sec
                              [  9]  0.0-20.1 sec  37.0 MBytes  15.4 Mbits/sec
                              [ 19]  0.0-20.1 sec  40.2 MBytes  16.8 Mbits/sec
                              [  8]  0.0-20.1 sec  34.6 MBytes  14.4 Mbits/sec
                              [ 13]  0.0-20.1 sec  41.9 MBytes  17.4 Mbits/sec
                              [ 10]  0.0-20.2 sec  41.4 MBytes  17.2 Mbits/sec
                              [  3]  0.0-20.2 sec  40.6 MBytes  16.9 Mbits/sec
                              [ 15]  0.0-20.2 sec  40.0 MBytes  16.6 Mbits/sec
                              [ 11]  0.0-20.2 sec  28.4 MBytes  11.8 Mbits/sec
                              [ 14]  0.0-20.2 sec  31.9 MBytes  13.3 Mbits/sec
                              [ 21]  0.0-20.2 sec  24.8 MBytes  10.3 Mbits/sec
                              [ 18]  0.0-20.2 sec  26.6 MBytes  11.0 Mbits/sec
                              [SUM]  0.0-20.2 sec   729 MBytes   302 Mbits/sec
                              
                              

                              Max was 302 Mbps

                              UDP Test

                              
                              iperf -c 192.168.1.1 -u -b 300m
                              
                              

                              Output

                              
                              Client connecting to 192.168.1.1, UDP port 5001
                              Sending 1470 byte datagrams
                              UDP buffer size:  160 KByte (default)
                              ------------------------------------------------------------
                              [  3] local 192.168.1.2 port 57996 connected with 192.168.1.1 port 5001
                              [ ID] Interval       Transfer     Bandwidth
                              [  3]  0.0-10.0 sec   346 MBytes   290 Mbits/sec
                              [  3] Sent 246971 datagrams
                              [  3] Server Report:
                              [  3]  0.0-10.0 sec   346 MBytes   290 Mbits/sec   0.017 ms   14/246970 (0.0057%)
                              [  3]  0.0-10.0 sec  1 datagrams received out-of-order
                              
                              

                               I am pretty happy with the results. Looks like the CPU is the bottleneck. I ran the TCP test for 200 seconds; it transferred 6.83 GB of data at 293 Mbps and the CPU was maxed out.
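
                               If anyone wants to repeat this, the stock FreeBSD tools are handy for watching where the CPU goes during a run (assuming shell or console access on the pfSense box), e.g.:

                                top -SH           # live per-thread view, including kernel/system threads
                                systat -vmstat 1  # rolling CPU, interrupt and memory statistics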

                                stephenw10 Netgate Administrator

                                 Interesting, thanks. It would be interesting to see how that compares with running the server and client on separate machines behind pfSense, on separate interfaces.

                                Steve

                                  w1ll1am

                                  @stephenw10:

                                   Interesting, thanks. It would be interesting to see how that compares with running the server and client on separate machines behind pfSense, on separate interfaces.

                                  Steve

                                  My plan was to try that when I get home. I will let you know.

                                    w1ll1am

                                     Two PCs on the same LAN, on the same pfSense port:

                                     [SUM]  0.0-20.1 sec  2.16 GBytes   926 Mbits/sec 
                                    

                                     Two PCs on two different NIC ports:

                                      [SUM]  0.0-20.0 sec   941 MBytes   394 Mbits/sec  
                                    
                                      stephenw10 Netgate Administrator

                                       Actually faster than just receiving the traffic. I guess it's definitely CPU bound then.
                                       A further interesting test would be to enable IP fastforwarding. That may or may not have a dramatic effect on traffic that is passed through the box but not terminated there.
                                       It's enabled in System: Advanced: System Tunables. Set the net.inet.ip.fastforwarding tunable to 1. You may have to reboot to activate that. Be aware that setting that value WILL break IPSec pass-through, so if you need that, disable it again after the test.
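
                                       If you just want to toggle it from a shell for the test, rather than adding the tunable in the GUI, something like this works too (it won't persist across a reboot):

                                        sysctl net.inet.ip.fastforwarding      # check the current value
                                        sysctl net.inet.ip.fastforwarding=1    # enable it until the next reboot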

                                      Steve

                                        w1ll1am

                                         It seems that maybe I am wrong. The CPU will max out only when using the pfSense box as the server; I wasn't watching it during the tests above. I set the fastforwarding option and get about the same speeds. I believe the best was 394 Mbps. The CPU only reached 67%, and only for a moment; the average for the 20-second test was probably 63%. So I am guessing that the PCI bus is what is limiting the card to ~400 Mbps?

                                          stephenw10 Netgate Administrator

                                           Yes. You might expect a maximum throughput of half the bandwidth, which would be ~512 Mbps; however, that doesn't allow for any return traffic, error correction, ACKs, etc. Did you see any reduction in CPU use with IP fastforwarding enabled?
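
                                           As a rough sanity check, assuming the slot really is plain 32-bit/33 MHz PCI shared by traffic in both directions:

                                            echo $(( 32 * 33 ))       # 1056 Mbit/s theoretical bus bandwidth (~132 MB/s)
                                            echo $(( 32 * 33 / 2 ))   # 528 Mbit/s if inbound and outbound share the bus,
                                                                      # before ACKs, arbitration and other overhead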

                                          Steve

                                            w1ll1am

                                             Pictures of the box. It was a pretty tight fit; I had to cut the SATA cable strain relief so it would bend enough when the PCI right-angle adapter pressed on it. I tried uploading these pictures to this post but they were too big.

                                            https://drive.google.com/file/d/0B2OPLQVuFuhDNFh0WmhhazJrUmJHUngxM0FHR1BiQjRLTUZv/edit?usp=sharing

                                            https://drive.google.com/file/d/0B2OPLQVuFuhDbF9xdEdkbGtwaWdzTTlvbW9MWWN5bVg1QmVR/edit?usp=sharing

                                            https://drive.google.com/file/d/0B2OPLQVuFuhDSUJJMzZYVHU5SkwtNk9jcGtxWTBaRWNvUnFr/edit?usp=sharing
