Netgate Discussion Forum

Need more than 1Gb/s LAN - How can I get there?

Hardware · 20 Posts · 8 Posters · 4.5k Views
0x10C:

I know this isn't really a pfSense-specific question, as it's more about the switch behind my pfSense box.

But here is my question: how can I get more than 1Gb/s on my LAN? I've considered two options.

1. Link Aggregation - but I don't really know how this works or whether it does what I want.
2. 10Gb network cards + a 10Gb switch

Here is my situation: I have a desktop and a server. The server contains all my storage, and 1Gb (about 112MB/s in my environment) is just not cutting it. I'd really like 400MB/s or more.

I was thinking about using Link Aggregation to bond four 1Gb Ethernet links together, but I'm not sure whether doing this actually gives me 4x1Gb combined - basically 4Gb network speeds - or whether it simply takes a single 1Gb link and splits it across multiple ports/cables, so that with 4 ports I'd get 250Mb/s through each cable for 1Gb max.

Then the other option is 10Gb. But it looks pricey. I'd need to spend about £300 on just the network cards (sourced from eBay), and then I'm looking at another £130 or so for a 10Gb switch. That's if I go with Ethernet; it seems that the fiber cards are cheaper.

Does anyone have any thoughts on this? Am I barking up the wrong tree with Link Aggregation? Are 10Gb cards worth the expense?

Thanks for any replies :)

kejianshi:

Get 10Gb/s everything…

NIC, switch... All of it.

0x10C:

Do you think I should go with 10Gb Ethernet or SFP+?

I noticed that the Brocade 10Gb cards which use SFP+ are a lot more affordable than Intel 10Gb Ethernet cards, for example. I'm just wondering if anyone knows which ones are best?

I'm running Windows Server 2012 R2 on my server and Windows 8.1 on my desktop, to give you an idea driver-wise.

kejianshi:

Well, I'd go with 10Gb Ethernet unless I needed to join a few switches together over long distances.
There are a few people here who regularly run 10Gb equipment on their networks.

I'd wait for a few other replies before making up my mind.

But me - I'd go with 10Gb Ethernet, with 2 or 4 SFP+ slots included as well.

0x10C:

Yeah, that makes sense. I guess Ethernet would actually be easier for my environment; I've not used SFP+ at all yet. If anyone reading the thread has a recommendation for a simple 10Gb Ethernet switch, I'd love to hear it ;D Any recommendations for cards would be great too - I was thinking the Intel X520 or X540.

Derelict (Netgate):

And be careful. SFP+ cards will still need transceiver modules.

0x10C:

@Derelict:

And be careful. SFP+ cards will still need transceiver modules.

I figured they needed something like that from the shops I saw bundling the cards with transceivers. I think I'll stick with Ethernet for simplicity's sake.

kejianshi:

I'm holding my breath waiting for someone to list some CHEAP 10Gb Ethernet switches and NICs…

Turning blue....

(-:

0x10C:

I figure I'll have to shell out quite a bit, but I'm willing to make that investment. My motto is: buy cheap, buy twice. So if I have to pay a bit more for quality, I'll do it, as long as I get the right stuff from the start.

kejianshi:

Well then - you should be happy to know that I've never seen any cheap 10Gb Ethernet equipment.
So, you are all set.

But yeah - I do think that if you need those speeds, it's the way to go.

kroberts:

You say you have a workstation and a server - would it be feasible to forgo the switch for now and connect them directly?

I don't know if that would work, but I'm dying to find out. I'd love to step up to 10Gb/s for 3 hosts; I'm contemplating dual-port NICs in each host and going point to point.

0x10C:

Yes, I could definitely go point to point and forgo the switch. It's actually something I'm considering to lower costs.

Protoman:

@0x10C:

Here is my situation: I have a desktop and a server. The server contains all my storage, and 1Gb (about 112MB/s in my environment) is just not cutting it. I'd really like 400MB/s or more.

I was thinking about using Link Aggregation to bond four 1Gb Ethernet links together, but I'm not sure whether doing this actually gives me 4x1Gb combined - basically 4Gb network speeds - or whether it simply takes a single 1Gb link and splits it across multiple ports/cables, so that with 4 ports I'd get 250Mb/s through each cable for 1Gb max.

I just want to make sure there is no confusion between gigabits (Gb) and gigabytes (GB). A 1 gigabit network has a max speed of 125 megabytes per second. According to what you have said, your network is running at 896 megabits per second (112 MB/s × 8 bits per byte).
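
In rough numbers, the conversions being thrown around in this thread look like the sketch below. These are nominal line rates only, ignoring protocol overhead:

```
# Quick bits-vs-bytes sanity check for the numbers in this thread.
BITS_PER_BYTE = 8

def mbps_to_MBps(megabits_per_second: float) -> float:
    """Convert a link rate in megabits/s to megabytes/s."""
    return megabits_per_second / BITS_PER_BYTE

def MBps_to_mbps(megabytes_per_second: float) -> float:
    """Convert an observed transfer rate in megabytes/s to megabits/s."""
    return megabytes_per_second * BITS_PER_BYTE

print(mbps_to_MBps(1_000))    # 1Gb link   -> 125.0 MB/s theoretical ceiling
print(MBps_to_mbps(112))      # 112 MB/s   -> 896.0 Mb/s actually seen on the wire
print(MBps_to_mbps(400))      # 400 MB/s target -> 3200.0 Mb/s, i.e. more than 3x 1Gb
print(mbps_to_MBps(10_000))   # 10Gb link  -> 1250.0 MB/s ceiling
```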

As for Link Aggregation: it will give you 4 sessions at 1Gb, not 1 session at 4Gb, even if the bonded interface reports 4Gb. In my experience and testing, any one traffic flow only moves across a single member interface; it does not get split across several of them.
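
A toy sketch of why that happens: typical LAG/LACP implementations pick the egress member by hashing the flow's addresses and ports, so every frame of a given session lands on the same 1Gb member. The exact hash fields vary by switch and OS; CRC32 below is just a stand-in for illustration:

```
# Toy illustration of hash-based egress selection on a 4-member LAG.
# Real switches/OSes use vendor-specific hashes (L2, L2+L3, or L3+L4 fields),
# but the consequence is the same: one flow -> one member link.
import zlib

MEMBERS = ["eth0", "eth1", "eth2", "eth3"]   # four bonded 1Gb ports

def pick_member(src_ip, dst_ip, src_port, dst_port):
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    return MEMBERS[zlib.crc32(key) % len(MEMBERS)]

# A single SMB session between desktop and server: every frame of the
# transfer hashes to the same member, so it can never exceed ~1Gb/s.
print(pick_member("192.168.1.10", "192.168.1.20", 49501, 445))

# Multiple parallel connections (different source ports) *can* spread
# across members, but a single connection never does.
for sport in (49501, 49502, 49503, 49504):
    print(sport, pick_member("192.168.1.10", "192.168.1.20", sport, 445))
```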

With that said, I set up a 10Gb network at my work for production servers and VM storage. The speeds I have seen average about 75 MB/s, with occasional spikes to 400 MB/s, and that is using SSDs and RAID. Even our 1Gb iSCSI and fiber iSCSI devices hardly ever see their maximum speeds in normal use.

Obviously, without knowing your environment and requirements, it is hard to say whether going 10Gb would be worthwhile or a waste of time and resources. Once you do go 10Gb, though, you end up finding new bottlenecks: bus speeds, cabling issues, drive speeds, and protocol overhead. As an example, SATA 2 tops out at 300 MB/s and really good SATA SSDs top out around 600 MB/s, and that's before any other bottlenecks.
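
A quick back-of-the-envelope way to see where the ceiling will land, using the rough nominal figures above (real numbers depend on the exact hardware):

```
# End-to-end throughput is capped by the slowest component in the path.
# Rough nominal ceilings in MB/s, taken from the figures discussed above.
path = {
    "1Gb NIC/link": 125,
    "10Gb NIC/link": 1250,
    "SATA 2 bus": 300,
    "SATA 3 SSD": 600,
}

def ceiling(*components):
    """The best you can hope for is the slowest component in the chain."""
    return min(path[c] for c in components)

# 1Gb network: the wire is the bottleneck (~112 MB/s real-world, as the OP sees).
print(ceiling("1Gb NIC/link", "SATA 3 SSD"))      # 125

# 10Gb network with a SATA 3 SSD: now the drive is the limit.
print(ceiling("10Gb NIC/link", "SATA 3 SSD"))     # 600

# 10Gb network but a SATA 2-attached drive: the bus is the limit.
print(ceiling("10Gb NIC/link", "SATA 2 bus"))     # 300
```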

0x10C:

That's correct. My network right now has a nice sustained transfer speed of 112MB/s, i.e. a 1Gb/s link after overhead.

For example, if I were to transfer a 30GB file right now from my server to my desktop, it transfers the entire way (SMB 3.0 on both sides) at 112MB/s with no dips. My server is running 9 x 4TB Hitachi disks in RAID6 on a high-end hardware RAID card. The desktop is using high-end SSDs with 500MB/s write rates.

What I'm looking for is performance around 4Gb/s - let's say around 400MB/s (3.2Gb/s). Anything above that would be nice but isn't needed.

I think what I'm going to do is buy two Intel 10Gb cards, link the desktop to the server directly, and see what that yields.

Protoman:

It'll be interesting to see what happens with the 10Gb NICs. If that is not fast enough, I would look at changing from RAID 6 to RAID 10 if possible. On my backup server, that change made the difference between backups finishing at noon and backups being done by the time I got to work. Obviously you lose some capacity, but you no longer have the parity overhead that RAID 5 and 6 carry.

0x10C:

I get over 600MB/s sustained with my current setup in RAID6. Not worried about that not being able to cope at all :)

Harvy66:

FYI, SMB3 is capable of using multiple paths in nearly every way possible. Be it multiple interfaces, each with a different IP address, or a single bonded interface, it will attempt to make multiple connections. One of the things specifically being addressed was bonded interfaces that use hash-based teaming.

In theory, you should be able to get near 4Gb/s over a teamed 4x1Gb link with SMB3, assuming the best case of large file transfers or even block-device access. SMB3 is comparable to or better than iSCSI.

I'm not sure if this works out of the box or needs to be configured, but I remember reading about these major changes.
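
To illustrate the idea (a loopback toy, not SMB itself): split one transfer into byte ranges and fetch them over several TCP connections in parallel. Multiple connections with distinct source ports are exactly what lets a hash-teamed LAG spread one logical transfer across its members:

```
# Toy loopback illustration (not SMB itself) of the multichannel idea:
# split one transfer into byte ranges and fetch them over several TCP
# connections in parallel. On a hash-teamed LAG, those connections can
# land on different 1Gb members, so their throughput can add up.
import socket
import threading

PAYLOAD = bytes(range(256)) * 4096   # ~1 MiB stand-in for a file
CHANNELS = 4                         # several connections for one transfer

def handle(conn):
    """Serve one 'offset length' request with that slice of PAYLOAD."""
    with conn:
        req = conn.makefile().readline()
        if req:
            offset, length = map(int, req.split())
            conn.sendall(PAYLOAD[offset:offset + length])

def serve(listener):
    while True:
        conn, _ = listener.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

def fetch(addr, offset, length, out, idx):
    """Client side of one channel: request a byte range and collect it."""
    with socket.create_connection(addr) as s:
        s.sendall(f"{offset} {length}\n".encode())
        buf = b""
        while len(buf) < length:
            chunk = s.recv(65536)
            if not chunk:
                break
            buf += chunk
        out[idx] = buf

listener = socket.create_server(("127.0.0.1", 0))
addr = listener.getsockname()
threading.Thread(target=serve, args=(listener,), daemon=True).start()

# Carve the "file" into one byte range per channel and fetch them concurrently.
chunk = len(PAYLOAD) // CHANNELS
parts = [b""] * CHANNELS
workers = [threading.Thread(target=fetch, args=(addr, i * chunk, chunk, parts, i))
           for i in range(CHANNELS)]
for w in workers:
    w.start()
for w in workers:
    w.join()

assert b"".join(parts) == PAYLOAD
print(f"reassembled {len(PAYLOAD)} bytes over {CHANNELS} parallel connections")
```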

aus_guy:

@Derelict:

And be careful. SFP+ cards will still need transceiver modules.

Only if you're using fiber - with twinax the transceiver is part of the cable (it also uses less power and has lower latency).
If you want to use Cat6a/7, be aware that it has much higher cable loss, and because of this you can't do FCoE.

While 10Gb Ethernet is ideal, it is still expensive and out of reach for most home users (this will change). If you have the spare switch ports, test out multiple 1Gb links: Server 2012 R2 and SMB 3 support multipathing, so where you were previously limited to the throughput of a single link in a team, you should now be able to take advantage of all the active links.

Derelict (Netgate):

Sure. Plenty of 40Gb twinax out there.


Beaflag VonRathburg:

This link might help you out:
https://forums.servethehome.com/index.php?threads/basic-10gb-setup-for-two-pcs.2105/

It leans towards an inexpensive point-to-point QDR InfiniBand setup. Two NICs can be had for around $100-ish, and ConnectX-2 cards support RDMA with the OSes you're using. It could save you a bit of money over going 10GbE, and it's quite a bit quicker.
