Netgate Discussion Forum

    HP T5740 gigabit over PCIx

w1ll1am:

      @stephenw10:

      That's really just limited by your WAN speed then. A more interesting test would be between two internal interfaces, both on the PCI-X card.

      Steve

I was looking at ways to test that. I came across iperf, so I am going to try it out later. I will post my results.

w1ll1am:

iperf results (this is between a client and pfSense). I am doing this remotely, so I only have access to pfSense and the client I am typing on now.

pfSense was the server.
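For reference, the server side would just be the stock iperf listener run on the pfSense box (a sketch; for the UDP test below the server needs -u as well):

 iperf -s      # run on pfSense; add -u before the UDP test
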

TCP test (20 seconds, 100 KB window, 20 parallel streams):

         iperf -c 192.168.1.1 -t 20 -w 100k -P 20
        

        Output

        
        [ ID] Interval       Transfer     Bandwidth
        [  6]  0.0-20.0 sec  40.8 MBytes  17.1 Mbits/sec
        [  7]  0.0-20.0 sec  34.6 MBytes  14.5 Mbits/sec
        [ 12]  0.0-20.0 sec  34.6 MBytes  14.5 Mbits/sec
        [  5]  0.0-20.0 sec  36.2 MBytes  15.2 Mbits/sec
        [  4]  0.0-20.0 sec  41.4 MBytes  17.3 Mbits/sec
        [ 17]  0.0-20.1 sec  36.9 MBytes  15.4 Mbits/sec
        [ 20]  0.0-20.0 sec  43.6 MBytes  18.3 Mbits/sec
        [ 16]  0.0-20.1 sec  37.9 MBytes  15.8 Mbits/sec
        [ 22]  0.0-20.1 sec  35.5 MBytes  14.8 Mbits/sec
        [  9]  0.0-20.1 sec  37.0 MBytes  15.4 Mbits/sec
        [ 19]  0.0-20.1 sec  40.2 MBytes  16.8 Mbits/sec
        [  8]  0.0-20.1 sec  34.6 MBytes  14.4 Mbits/sec
        [ 13]  0.0-20.1 sec  41.9 MBytes  17.4 Mbits/sec
        [ 10]  0.0-20.2 sec  41.4 MBytes  17.2 Mbits/sec
        [  3]  0.0-20.2 sec  40.6 MBytes  16.9 Mbits/sec
        [ 15]  0.0-20.2 sec  40.0 MBytes  16.6 Mbits/sec
        [ 11]  0.0-20.2 sec  28.4 MBytes  11.8 Mbits/sec
        [ 14]  0.0-20.2 sec  31.9 MBytes  13.3 Mbits/sec
        [ 21]  0.0-20.2 sec  24.8 MBytes  10.3 Mbits/sec
        [ 18]  0.0-20.2 sec  26.6 MBytes  11.0 Mbits/sec
        [SUM]  0.0-20.2 sec   729 MBytes   302 Mbits/sec
        
        

Aggregate throughput was 302 Mbps.

UDP test (300 Mbit/s offered rate):

        
        iperf -c 192.168.1.1 -u -b 300m
        
        

        Output

        
        Client connecting to 192.168.1.1, UDP port 5001
        Sending 1470 byte datagrams
        UDP buffer size:  160 KByte (default)
        ------------------------------------------------------------
        [  3] local 192.168.1.2 port 57996 connected with 192.168.1.1 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.0 sec   346 MBytes   290 Mbits/sec
        [  3] Sent 246971 datagrams
        [  3] Server Report:
        [  3]  0.0-10.0 sec   346 MBytes   290 Mbits/sec   0.017 ms   14/246970 (0.0057%)
        [  3]  0.0-10.0 sec  1 datagrams received out-of-order
        
        

I am pretty happy with the results. It looks like the CPU is the bottleneck. I ran the TCP test for 200 seconds; it transferred 6.83 gigabytes of data at 293 Mbps, and the CPU was maxed out.
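
(For anyone repeating this: one way to watch the CPU during a run is from the pfSense shell, using standard FreeBSD tools rather than anything pfSense-specific:)

 top -SH      # -S includes kernel threads, -H shows per-thread usage (watch interrupt time)
 vmstat -i    # per-device interrupt counts; a busy NIC shows up here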

stephenw10 (Netgate Administrator):

Interesting, thanks. It would be interesting to see how that compares with running the server and client on separate machines behind pfSense, on separate interfaces.

          Steve

w1ll1am:

            @stephenw10:

Interesting, thanks. It would be interesting to see how that compares with running the server and client on separate machines behind pfSense, on separate interfaces.

            Steve

            My plan was to try that when I get home. I will let you know.

w1ll1am:

Two PCs on the same LAN, on the same pfSense port:

               [SUM]  0.0-20.1 sec  2.16 GBytes   926 Mbits/sec 
              

Two PCs on two different NIC ports (routed through pfSense):

                [SUM]  0.0-20.0 sec   941 MBytes   394 Mbits/sec  
              
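For anyone reproducing the routed test: it is the same iperf pair, with the server on one segment and the client on the other. The addresses below are placeholders, and the flags are assumed to mirror the earlier TCP test:

 iperf -s                                    # on the PC behind the second interface
 iperf -c 192.168.2.10 -t 20 -w 100k -P 20   # on the PC behind the first interface
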
stephenw10 (Netgate Administrator):

Actually faster than just receiving the traffic. I guess it's definitely CPU bound then.
A further interesting test would be to enable IP fastforwarding. That may or may not have a dramatic effect on traffic that is passed through the box but not terminated there.
It's enabled in System > Advanced > System Tunables: set the net.inet.ip.fastforwarding tunable to 1. You may have to reboot to activate that. Be aware that setting that value WILL break IPsec pass-through, so if you need that, disable it again after the test.
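
For a quick test you can also flip the same tunable from a shell (this assumes a pfSense version whose FreeBSD base still has the tunable; later FreeBSD releases removed it):

 sysctl net.inet.ip.fastforwarding      # check the current value
 sysctl net.inet.ip.fastforwarding=1    # enable until the next reboot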

                Steve

w1ll1am:

It seems that maybe I was wrong. The CPU only maxes out when using the pfSense box itself as the iperf server; I wasn't watching it during the tests above. I set the fastforwarding option and get about the same speeds; I believe the best was 394 Mbps. The CPU only reached 67%, and only for a moment; the average over the 20-second test was probably 63%. So I am guessing that the PCI bus is limiting the card to ~400 Mbps?

stephenw10 (Netgate Administrator):

Yes. You might expect a maximum throughput of half the bus bandwidth, which would be ~512 Mbps; however, that doesn't allow for any return traffic, error correction, ACKs, etc. Did you see any reduction in CPU use with IP fastforwarding enabled?
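
(Rough arithmetic, assuming a plain 32-bit, 33 MHz PCI bus; the exact ceiling depends on the riser and chipset:)

 32 bits x 33 MHz = ~1056 Mbit/s of shared bus bandwidth
 forwarded traffic crosses the bus twice (NIC in -> RAM -> NIC out)
 1056 / 2 = ~528 Mbit/s ceiling before ACKs and overhead, roughly the ~512 Mbps above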

                    Steve

w1ll1am:

Pictures of the box. It was a pretty tight fit; I had to cut the SATA cable strain relief so it would bend enough where the PCI right-angle adapter pressed on it. I tried uploading these pictures to this post, but they were too big.

                      https://drive.google.com/file/d/0B2OPLQVuFuhDNFh0WmhhazJrUmJHUngxM0FHR1BiQjRLTUZv/edit?usp=sharing

                      https://drive.google.com/file/d/0B2OPLQVuFuhDbF9xdEdkbGtwaWdzTTlvbW9MWWN5bVg1QmVR/edit?usp=sharing

                      https://drive.google.com/file/d/0B2OPLQVuFuhDSUJJMzZYVHU5SkwtNk9jcGtxWTBaRWNvUnFr/edit?usp=sharing

Riddler652:

Looks great. One question: where did you get the power to run your fan?
