Netgate Discussion Forum

    Help me tune this amazing system :) *EVERYONE COME IN AND READ!*

    General pfSense Questions
    13 Posts 7 Posters 4.1k Views
    • stephenw10 (Netgate Administrator)

      First off, yes, if the client and server are in the same subnet then traffic goes only via the switch and not the router.

      I'm not sure if you have confused your bits (b) and bytes (B), since ~110-125MB/s is the practical maximum throughput for a gigabit NIC.
      Lastly, do you mean PCI or PCIe? If you are using PCI you have to be careful you're not running out of bus bandwidth, which is ~1Gbps (for a 32-bit, 33MHz bus).
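
      (As a quick sanity check on that figure: a 32-bit bus clocked at 33MHz moves 32 × 33,000,000 ≈ 1,056Mbps, roughly 132MB/s, and that bandwidth is shared by every device on the bus.)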

      Steve

      • elementalwindx

        @stephenw10:

        First off, yes, if the client and server are in the same subnet then traffic goes only via the switch and not the router.

        I'm not sure if you have confused your bits (b) and bytes (B), since ~110-125MB/s is the practical maximum throughput for a gigabit NIC.
        Lastly, do you mean PCI or PCIe? If you are using PCI you have to be careful you're not running out of bus bandwidth, which is ~1Gbps (for a 32-bit, 33MHz bus).

        Steve

        No, I'm not confusing bits and bytes :) I was just hoping there was some crazy way to stretch the limit, maybe by doing something like teaming or bonding, even if it meant putting 2 NICs in each desktop. All the NICs were PCIe x1 cards used in x8 and x16 slots. (Yes, I know they're not faster if you put an x1 card in an x8 slot, etc.; it's just what the motherboard had.)

        • janneb

          Trunking/teaming/link aggregation works in many different ways, but none I've heard of will accelerate a single CIFS connection to 2+ gigabit. You need multiple connections, preferably from different protocols, to make use of a 2+ gigabit trunk. You're not actually creating a 2Gbit port; you have two or more load-balanced 1Gbit ports. Correct me if I'm wrong. Like Steve said, ~125MB/s is great throughput.
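
          For reference, on a FreeBSD-based box like pfSense a LACP lagg is created roughly like this from a shell (em0/em1 and the address are placeholders for your actual NICs, and the matching switch ports must be configured for LACP too):

              ifconfig lagg0 create
              ifconfig lagg0 laggproto lacp laggport em0 laggport em1
              ifconfig lagg0 inet 192.168.10.2/24 up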

          • janneb

            @elementalwindx:

            No, I'm not confusing bits and bytes :) I was just hoping there was some crazy way to stretch the limit, maybe by doing something like teaming or bonding, even if it meant putting 2 NICs in each desktop. All the NICs were PCIe x1 cards used in x8 and x16 slots. (Yes, I know they're not faster if you put an x1 card in an x8 slot, etc.; it's just what the motherboard had.)

            Keep the trunk on the "server" and try two simultaneous transfers from/to two clients. If the per-client speed does not drop to ~60MB/s, you're good.

            • elementalwindx

              @janneb:

              Trunking/teaming/link aggregation works in many different ways, but none I've heard of will accelerate a single CIFS connection to 2+ gigabit. You need multiple connections, preferably from different protocols, to make use of a 2+ gigabit trunk. You're not actually creating a 2Gbit port; you have two or more load-balanced 1Gbit ports. Correct me if I'm wrong. Like Steve said, ~125MB/s is great throughput.

              Yeah, I figured this was pretty much impossible to do. :/ I wish network technology could keep up with SSDs :)

              Know of any Windows-based software that will let me stress-test my entire network, to see just how many computers can run at max throughput before the bandwidth between them starts to drop?

              Something like: leave the program running on one PC, then start it on a second and leave it running, then a third, then a fourth.

              I have 4 client computers here that will be on this network, plus several laptops/tablets, but I want to see if it will slow down even with 4 computers at full demand. *evil grin*

              • heper

                iperf, works on most common operating systems

                http://sourceforge.net/projects/iperf/
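
                A minimal run looks like this (the address is a placeholder for your server):

                    iperf -s                   (on the server)
                    iperf -c 192.168.10.2      (on each client)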

                • matguy

                  @heper:

                  iperf, works on most common operating systems

                  http://sourceforge.net/projects/iperf/

                  iperf is good, although the Windows version may not scale past 1Gb (I tried using it to test some 10Gb links for a VMware host and used *nix instead). But that probably won't be a problem for your current task (although it may have been for your previous goals).

                  • elementalwindx

                    Man, I wish I had a better understanding of iperf… check out these results between my server and my client.

                    C:\iperf>iperf -f MBytes -p 2012 -c 192.168.10.2 -w 2000000000

                    ------------------------------------------------------------
                    Client connecting to 192.168.10.2, TCP port 2012
                    TCP window size: 1907 MByte
                    ------------------------------------------------------------
                    [  3] local 192.168.10.100 port 51139 connected with 192.168.10.2 port 2012
                    [ ID] Interval       Transfer     Bandwidth
                    [  3]  0.0-10.0 sec  2985 MBytes   298 MBytes/sec

                    This below is between my pfSense box (client) and my server (server), running the same command as above.

                    ------------------------------------------------------------
                    Client connecting to 192.168.10.2, TCP port 2012
                    TCP window size: 65.0 KByte (WARNING: requested 1.86 GByte)
                    ------------------------------------------------------------
                    [  8] local 192.168.10.254 port 6616 connected with 192.168.10.2 port 2012
                    [ ID] Interval       Transfer     Bandwidth
                    [  8]  0.0-10.0 sec  1.07 GBytes   109 MBytes/sec

                    X_X

                    Can someone give me a good idea of what flags I should use on the server and the clients? I want to test 4 desktops hitting the server at full force and see what bandwidth it can handle. I'll try 2, then 3, then all 4 at once.

                    • stephenw10 (Netgate Administrator)

                      At the very least you should use the same TCP window size for both tests to get a realistic comparison.
                      A 64k window seems far more sensible than a 1.8GB window, though I confess I've never really considered it until now. The default window size is usually 8 or 16k. If you want to specify the total amount of data sent, use the -n flag (number of bytes).
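
                      For example, something along these lines should give a comparable, repeatable test (iperf 2 syntax; the port, window size and byte count are just suggestions):

                          iperf -s -p 2012 -w 64K                           (on the server)
                          iperf -c 192.168.10.2 -p 2012 -w 64K -n 1000M     (on each desktop)

                      Start the client on one desktop, then on two, three and four at once, and compare the reported bandwidth.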

                      Steve

                      • sleeprae

                        Windows Server 2012, with built-in NIC teaming (LBFO: Load Balancing and Failover) and SMB 3.0 (which features multichannel SMB), will be able to use as many links as you have for a single file transfer. From what I've been able to tell, the client side (Windows 8) does not include the built-in teaming capability, and it's unclear if IHVs will provide updated drivers to support it.
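
                        As a rough sketch, on Server 2012 a team is created with the built-in PowerShell cmdlet, and you can check afterwards whether SMB multichannel is actually in use (the team and NIC names here are made up):

                            New-NetLbfoTeam -Name "Team1" -TeamMembers "Ethernet","Ethernet 2"
                            Get-SmbMultichannelConnection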

                        • elementalwindx

                          @sleeprae:

                          Windows Server 2012, with built-in NIC teaming (LBFO: Load Balancing and Failover) and SMB 3.0 (which features multichannel SMB), will be able to use as many links as you have for a single file transfer. From what I've been able to tell, the client side (Windows 8) does not include the built-in teaming capability, and it's unclear if IHVs will provide updated drivers to support it.

                          ^
                          Just read up on that! WOW! :) Can't wait!

                          • dreamslacker

                            If you want to capitalize on the LACP links now, try using Robocopy with the /MT option. That turns on multithreaded mode, which allows multiple concurrent connections (provided you are transferring more than one file).
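
                            For example (the share and destination are placeholders; /MT defaults to 8 threads, /E copies subdirectories):

                                robocopy \\server\share D:\dest /E /MT:8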
