Netgate Discussion Forum
    Quick help, min. hardware req. for 6x 1gbps nic

    • josey

      I will test that, thanks!

      • valnar

        "then I put 6 PCI 1gbps NICs inside, and then it starts to freeze randomly."

        :o Yep. That's the problem, per what wallabybob stated.

        Although my recommendation above would still build a kick-ass server. ;D

        • jonnytabpni

          What you could do is maybe work out a "better" network topology.

          I know it's nice to keep things simple by having all connections go into one box, but is this the only way to achieve this? I know you are trying to get 6 optical lines in, but what are you doing with these? Are you doing a load balancing configuration? Do you have multiple subnets? Failover?

          Would it be possible to split your load over, let's say, 3 boxes, and route between them when required?

          You'll find that while pfSense is a great firewall, there is a reason why <enter popular hardware firewall vendor here> gear is so damn expensive. A lot of 3rd party gear has dedicated network processors (in addition to the regular CPU) which split up the work efficiently, whereas pfSense is just running on standard hardware.

          Can you maybe give us insight into what exactly you are trying to do, and then we could maybe work out a nicer topology for you?

          :)

          • josey

            It is an extended star topology and I want to raise the whole network to 1gbps… it is only a test phase....

            I did a little testing and network recording, and lol, it seems that my continuous network traffic per interface is lower than 10mbps,

            about 50mbps total :)
            and only in peaks do I get 150mbps, maybe once per day for one second :)

            but I will still get some server machine and try this configuration ;)

            Guys, again, thanks for the answers!

            • josey

              I did some testing and research…
              PCI has a throughput of around 130mbps, so a 1000mbps NIC doesn't make any sense...
              But I tried to test it, and if the traffic graph in PFS is accurate, my highest throughput from one interface to another is 300mbps. Which also doesn't make any sense if the maximum throughput is around 130mbps per PCI slot.
              Also, HDDs in standard PCs are just too slow; I had to copy files over the network with several PCs.

              The PFS machine at 300mbps is using about 25% of the CPU (AMD 3GHz).

              It seems there is no point in forcing 1gbps on the LAN :)

              thanks guys

              • GruensFroeschli

                PCI caps at ~130 MByte/s, not Mbit/s (about 1000 Mbit/s).
                If you achieve 300mbps, this is Mbit/s.
                Also keep in mind that you are measuring "throughput" --> 300 Mbit/s throughput is actually 600 Mbit/s load on the PCI bus (traffic going in and the same traffic leaving out).
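
                A quick back-of-the-envelope check of those numbers, as a sketch in Python (the 130 MByte/s ceiling and the 300 Mbit/s graph reading are simply the figures quoted above, not measurements of any particular board):

                    # Unit check: PCI ceiling in Mbit/s vs. the observed routed
                    # throughput, counting the bus load twice because each packet
                    # crosses the PCI bus once in and once out.
                    PCI_MBYTE_PER_S = 130                 # quoted PCI ceiling, MByte/s
                    pci_mbit_per_s = PCI_MBYTE_PER_S * 8  # ~1040 Mbit/s on the bus

                    observed_mbit = 300                   # pfSense traffic graph reading
                    bus_load_mbit = observed_mbit * 2     # in + out over the same bus

                    print(pci_mbit_per_s, bus_load_mbit)  # 1040 600: still under the ceiling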

                To test throughput you don't have to copy files around with multiple computers.
                You can simply use iperf.
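
                If iperf isn't at hand, a very crude stand-in can be put together with plain Python sockets. This is only a sketch: the port number, chunk size and duration below are arbitrary choices, and iperf itself gives far better controlled results.

                    # Crude TCP throughput test: run "server" on one host and
                    # "client <server_ip>" on the other. Not a replacement for
                    # iperf, just a minimal illustration of the idea.
                    import socket, sys, time

                    PORT = 5001            # arbitrary test port
                    CHUNK = 64 * 1024      # 64 KiB per send/recv
                    DURATION = 10          # seconds the client transmits

                    def server():
                        with socket.socket() as srv:
                            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                            srv.bind(("", PORT))
                            srv.listen(1)
                            conn, _ = srv.accept()
                            received, start = 0, time.time()
                            with conn:
                                while (data := conn.recv(CHUNK)):
                                    received += len(data)
                            secs = time.time() - start
                            print(f"{received * 8 / secs / 1e6:.0f} Mbit/s received")

                    def client(host):
                        payload = b"\0" * CHUNK
                        with socket.socket() as s:
                            s.connect((host, PORT))
                            end = time.time() + DURATION
                            while time.time() < end:
                                s.sendall(payload)

                    if __name__ == "__main__":
                        server() if sys.argv[1] == "server" else client(sys.argv[1])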

                We do what we must, because we can.

                Asking questions the smart way: http://www.catb.org/esr/faqs/smart-questions.html

                • josey

                  Actually you are right, the 300mbps (I know it is Mbits) is in on one interface and out on another, but I was sure that PCI was only 130 Mbits, not MBytes.
                  Also, I want to try out live how heavily I can load the interfaces with real, likely traffic.

                  • wallabybob

                    Standard PCI can do a "burst" transfer of 4 bytes at a rate of 33MHz, so it has an absolute maximum transfer rate of 4 x 33 MBytes/sec = 133 MBytes/sec, or 1064 Mbits/sec. Practical transfer rates are typically rather less than this because burst length is limited (to allow other devices a chance to use the bus) and a group of burst transfers has to be preceded by an address cycle. So if burst length were limited to 8 cycles there would be an address cycle before the 8 burst cycles, yielding an overhead of 1 in 9.
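
                    The same arithmetic written out as a small Python sketch (the 8-cycle burst length is just the example from the paragraph above, not a property of any particular chipset):

                        # PCI bus arithmetic from the paragraph above.
                        BUS_WIDTH_BYTES = 4      # standard PCI moves 4 bytes per data cycle
                        CLOCK_MHZ = 33.33        # 33 MHz bus clock

                        peak_mbytes = BUS_WIDTH_BYTES * CLOCK_MHZ   # ~133 MBytes/sec burst peak
                        peak_mbits = peak_mbytes * 8                # ~1066 Mbits/sec

                        # With bursts capped at 8 data cycles, each burst also needs one
                        # address cycle, so only 8 of every 9 cycles carry data.
                        BURST_LEN = 8
                        usable_mbits = peak_mbits * BURST_LEN / (BURST_LEN + 1)

                        print(round(peak_mbytes), round(peak_mbits), round(usable_mbits))
                        # -> 133 1067 948  (before any NIC or protocol overhead)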

                    I think I've seen that in optimum conditions a good PCI NIC can get a bit more than 600 Mbits/sec throughput, and that's using big frames and counting all the protocol overheads.

                    Hence the attraction of PCI Express where a single slot has over 2Gbits/sec throughput and each slot is an independent bus.

                    • josey

                      Yup, done some tests again with iperf.
                      PFS can't consume more than nearly 600mbps (up to 280mbps per interface), and that's it…
                      From machine to machine through a switch I get from 650 to 850mbps,
                      so it seems that I have to get something faster, with PCIe NICs :)

                      Guys, did you test throughput with PCIe risers? With two double NICs per slot?
                      What is the situation then?

                      • wallabybob

                        @josey:

                        Guys, did you test throughput with PCIe risers? With two double NICs per slot?
                        What is the situation then?

                        How would risers make a difference?
                        Two double NICs per slot? Do you mean a quad port card?

                        PCI Express (PCIe) is a serial bus. PCIe slots can have multiple "lanes" (typically 1, 4, 8 or 16) which multiply the bandwidth of the slot (a serial-parallel combination). If I recall correctly, a PCIe V1 lane would have about 2Gbps bandwidth while a PCIe V2 lane has about double that.
                        If you want a 4 port card to be able to drive all ports at line rate, both in and out concurrently, then you need at least 4 lanes. Hence you need a x4 slot (or x8 slot or x16 slot).
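
                        A small sizing sketch of that rule of thumb in Python (the ~2 Gbps-per-lane figure is the approximate PCIe v1 number quoted above, and the one-lane-per-gigabit-port accounting follows this post rather than any datasheet):

                            # One-lane-per-gigabit-port sizing, per the figures above.
                            PORTS = 4
                            PORT_DUPLEX_GBPS = 2.0   # 1 Gbit/s in + 1 Gbit/s out at line rate
                            LANE_V1_GBPS = 2.0       # approximate bandwidth of one PCIe v1 lane

                            lanes_needed = PORTS * PORT_DUPLEX_GBPS / LANE_V1_GBPS
                            print(f"quad gigabit card at full duplex line rate -> "
                                  f"about {lanes_needed:.0f} PCIe v1 lanes (an x4 or wider slot)")
                            # A PCIe v2 lane is roughly twice as fast, so fewer lanes would do
                            # on paper, but real quad cards are still built for x4/x8 slots.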

                        • josey

                          No, I mean a double port card, not quad. Why? It is cheaper :)
                          Why would a riser make a difference?
                          Well, since it is one slot with a riser that has 2 slots, and 2 NICs… so I asked...

                          Many thanks for the answers!
