Netgate Discussion Forum

    Help me tune this amazing system :) *EVERYONE COME IN AND READ!*

    General pfSense Questions
    13 Posts 7 Posters 4.0k Views

    • elementalwindx

      Ok here is the setup:

      Trendnet Green Gigabit Router - Capable of 32Gbps with all ports used. ( http://www.newegg.com/Product/Product.aspx?Item=N82E16833156293 )

      Custom Asus-built SBS 2011 Standard server that can read/write at 3GB+/sec (benchmarked) using RAID 10 with 4 SSDs, connected to a Trendnet green gigabit switch using 2 onboard Intel gigabit NICs teamed together.

      Client computers are all equipped with the same SSD (one per PC), running Windows 7 Pro x64 on the domain. These are all Intel motherboards using onboard NICs.

      The router is an ASUS Intel i3 pfSense box with 8GB DDR3 and an SSD, using pci 1x Intel gigabit NICs.

      Transferring a 1.04GB Office 2010 install file, I can only get a max transfer rate of 110-130MB/sec.

      Now I figured with a setup such as this, I would get WAY faster file transfer speeds. What is limiting me? Everything on the network is gigabit. The slowest networking component I can think of is the onboard gigabit ports.

      Below I have tried various situations/combinations to try and get more bandwidth:

      1 NIC (server) and 1 NIC (client): I get ~70MB/sec.

      Teamed 3 NICs on the server for a 3Gb connection and teamed 2 NICs on the client for a 2Gb connection. So now we have 3 NICs (server) and 2 NICs (client), both teamed, and I still get 110-130MB/sec with 0 improvement.

      I ALSO eliminated the switch/router and did 2 NICs in the server and 2 in the client, all teamed, with the server and client directly connected, and only got 70MB/sec.

      I ALSO did it directly with just 1 NIC on the server and 1 on the client and netted 70MB/sec directly connected.

      So far the best-speed scenario is 2 NICs teamed on the server and 1 NIC on the client going through the switch and router, netting me 110-130MB/sec.

      I'm now guessing my limitation is in the pfSense box somewhere, as I have tried changing the NIC settings to use dedicated processor cores, maxed the input/output threads, and even turned on jumbo frames at their highest setting (9KB) and still cannot improve past 110-130MB/sec.

      Honestly though, if you transfer a file over SMB/network from server to client, does it even go through the router at all? Or does it just go server -> switch -> client?

      Just to add: I added another NIC to the pfSense box and created a LAGG LAN setup under 192.168.10.254, and it seems to have improved performance maybe 2-5%... that could be my imagination though.

      Ideas anyone? :/
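
      (For reference, the LAGG mentioned above is a FreeBSD lagg(4) interface under the hood; pfSense builds it from the GUI, but the shell-level equivalent looks roughly like the sketch below, where igb0/igb1 are placeholder member NICs. Note that LACP mode also needs matching link-aggregation configuration on the switch side.)

      # illustrative only - pfSense normally creates this via its GUI; igb0/igb1 are placeholders
      ifconfig lagg0 create
      ifconfig lagg0 up laggproto lacp laggport igb0 laggport igb1 192.168.10.254 netmask 255.255.255.0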

      • stephenw10 (Netgate Administrator)

        First off, yes, if the client and server are in the same subnet then traffic goes only via the switch and not the router.

        I'm not sure if you have confused your bits (b) and bytes (B), since 110-130MBps is the maximum throughput for a gigabit NIC.
        Lastly, do you mean PCI or PCIe? If you are using PCI you have to be careful you're not running out of bus bandwidth, which is ~1Gbps (for a 32-bit 33MHz bus).

        Steve
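
        (Rough arithmetic behind that number: 1Gbit/sec is 125MB/sec of raw bit rate. With a standard 1500-byte MTU each frame carries 1460 bytes of TCP payload but occupies about 1538 bytes on the wire once the Ethernet header, FCS, preamble and inter-frame gap are counted, so a single TCP stream tops out around 125 x 1460/1538 ≈ 118MB/sec before SMB and disk overhead.)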

        • elementalwindx

          @stephenw10:

          First off, yes, if the client and server are in the same subnet then traffic goes only via the switch and not the router.

          I'm not sure if you have confused your bits (b) and bytes (B), since 110-130MBps is the maximum throughput for a gigabit NIC.
          Lastly, do you mean PCI or PCIe? If you are using PCI you have to be careful you're not running out of bus bandwidth, which is ~1Gbps (for a 32-bit 33MHz bus).

          Steve

          No, I'm not confusing bits and bytes :) I was just hoping there was some crazy way to stretch the limit, maybe by doing something like teaming or bonding, even if it meant putting 2 NICs in each desktop. All the NICs were pci 1x cards used in 8x and 16x ports. (Yes, I know they are not faster if you put a 1x in an 8x, etc.; it's just what the motherboard had for slots.)

          • janneb

            Trunking/teaming/link aggregation works in many different ways, but none I've heard of will accelerate the speed of one (1) CIFS connection to 2+ gigabit. You need multiple connections, preferably from different protocols, to make use of 2+ gigabit trunks. You're not actually creating a 2Gbit port; you have two or more load-balanced 1Gbit ports. Correct me if I'm wrong. Like Steve said, 130MBps is great throughput.
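
            (As a concrete illustration of why one transfer can't be split: a typical LACP/trunk hash picks the member port from the frame's source/destination MAC or IP/port pair, e.g. hash(src, dst) mod number-of-ports. For a single CIFS copy those values never change, so every frame lands on the same 1Gbit member; only a second transfer involving a different client/port can hash onto the other link.)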

            • janneb

              @elementalwindx:

              No, I'm not confusing bits and bytes :) I was just hoping there was some crazy way to stretch the limit, maybe by doing something like teaming or bonding, even if it meant putting 2 NICs in each desktop. All the NICs were pci 1x cards used in 8x and 16x ports. (Yes, I know they are not faster if you put a 1x in an 8x, etc.; it's just what the motherboard had for slots.)

              Keep the trunk on the "server" and try two simultaneous transfers from/to two clients. If the speed does not drop to 60MBps, you're good.

              • elementalwindx

                @janneb:

                Trunking/teaming/link aggregation works in many different ways, but none I've heard of will accelerate the speed of one (1) CIFS connection to 2+ gigabit. You need multiple connections, preferably from different protocols, to make use of 2+ gigabit trunks. You're not actually creating a 2Gbit port; you have two or more load-balanced 1Gbit ports. Correct me if I'm wrong. Like Steve said, 130MBps is great throughput.

                Yea I figured this was pretty impossible to do. :/ I wish network technology could keep up with SSD :)

                Know of any Windows-based software that will let me stress-test my entire network to see just how many computers can run at max throughput before it starts to lower the bandwidth between them all?

                Something like: leave the program running on 1 PC, then start it on a 2nd and leave it running and see, then a 3rd, then a 4th.

                I have 4 client computers here that will be on this network, plus several laptops/tablets, but I want to see if it will slow down even with 4 computers at full demand. *evil grin*

                • heper

                  iperf works on most common operating systems:

                  http://sourceforge.net/projects/iperf/
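
                  For the stress test described above, something like the classic iperf 2.x usage should do (192.168.10.2 and port 2012 are the values used elsewhere in this thread; the 30-second duration is just an example):

                  on the server (192.168.10.2):   iperf -s -p 2012
                  on each client, one at a time:  iperf -c 192.168.10.2 -p 2012 -t 30 -f MBytes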

                  • matguy

                    @heper:

                    iperf works on most common operating systems:

                    http://sourceforge.net/projects/iperf/

                    iperf is good, although the Windows version may not scale past 1Gb (I tried using it to test some 10Gb links for a VMware host and used *nix instead). But that probably won't be a problem for your current task (although it may have been for your previous goals).

                    • elementalwindx

                      Man, I wish I had a better understanding of iperf... check out these results between my server and my client.

                      C:\iperf>iperf -f MBytes -p 2012 -c 192.168.10.2 -w 2000000000

                      Client connecting to 192.168.10.2, TCP port 2012
                      TCP window size: 1907 MByte

                      [  3] local 192.168.10.100 port 51139 connected with 192.168.10.2 port 2012
                      [ ID] Interval       Transfer     Bandwidth
                      [  3]  0.0-10.0 sec  2985 MBytes   298 MBytes/sec

                      This below is between my pfSense box (client) and my server (server), running the same command as above.

                      ------------------------------------------------------------
                      Client connecting to 192.168.10.2, TCP port 2012
                      TCP window size: 65.0 KByte (WARNING: requested 1.86 GByte)

                      [  8] local 192.168.10.254 port 6616 connected with 192.168.10.2 port 2012
                      [ ID] Interval       Transfer     Bandwidth
                      [  8]  0.0-10.0 sec  1.07 GBytes   109 MBytes/sec

                      X_X

                      Can someone give me a good idea of what flags I should use on the server and the client? I want to test 4 desktops hitting the server at full force and see what bandwidth it can handle. I'll try 2, then 3, then all 4 at once.

                      • stephenw10 (Netgate Administrator)

                        At the very least you should try to get the same TCP window size for both tests to get a realistic comparison.
                        A 64k window seems far more sensible than a 1.8GB window, though I confess I've never really considered it until now. The default window size is usually 8 or 16k. If you want to specify the total amount of data sent, use the -n flag (number of bytes).

                        Steve
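
                        A like-for-like rerun along those lines might look something like this (the window size and byte count are just example values):

                        server:  iperf -s -p 2012 -w 64K
                        client:  iperf -c 192.168.10.2 -p 2012 -w 64K -n 1024M -f MBytes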

                        • sleeprae

                          Windows Server 2012, with built-in NIC teaming (LBFO - Load Balancing and Fail Over) and SMB 3.0 (which features multi-channel SMB), will be able to use as many links as you have for a single file transfer. From what I've been able to tell, the client side (Windows 8) does not include the built-in teaming capability, and it's unclear if IHVs will provide updated drivers to support it.
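
                          (For reference, on Server 2012 the team can be created and SMB multichannel verified from PowerShell, roughly as below; the team and NIC names are placeholders.)

                          # team and NIC names are placeholders
                          New-NetLbfoTeam -Name "LanTeam" -TeamMembers "Ethernet 1","Ethernet 2"
                          Get-SmbMultichannelConnection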

                          • elementalwindx

                            @sleeprae:

                            Windows Server 2012, with built-in NIC teaming (LBFO - Load Balancing and Fail Over) and SMB 3.0 (which features multi-channel SMB), will be able to use as many links as you have for a single file transfer. From what I've been able to tell, the client side (Windows 8) does not include the built-in teaming capability, and it's unclear if IHVs will provide updated drivers to support it.

                            ^
                            Just read up on that! WOW! :) Can't wait!

                            • dreamslacker

                              If you want to capitalize on the LACP links now, try using Robocopy with the /MT option. That turns on multi-threaded mode, which allows multiple concurrent connections (provided you are transferring more than one file).
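
                              A minimal example (the source share and destination path are placeholders):

                              robocopy \\SERVER\Install C:\Temp\Install /E /MT:8

                              where /E includes subfolders and /MT:8 runs eight copy threads.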
