Netgate Discussion Forum

Old PC or newer box

Hardware · 25 Posts · 7 Posters · 3.2k Views
Inxsible:

@TS_b:

    It's not ITX; that's what I said in my earlier post: you have to buy a bigger board.

    The x16 isn't a factor for NICs; no modern dual- or quad-port Ethernet NICs are x16, they are x4. And a quad-port gigabit NIC only needs x1 PCIe v2.0 to max out its speed.

Agreed. My earlier post said the same thing: I have only found x4 LAN cards. The question is: why do they make them x4 if all they require is x1 speed?

TS_b:

I think it's because they only require x1 speeds on PCIe v2.0+.

Someone with a PCIe v1.x slot would need all 4 lanes.

Back in the day, the ends of the slots on mobos were open, allowing the user to make an intelligent selection.
These days they dummy-proof them and, in the process, inconvenience the <1% who would care.
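
A rough sanity check of those numbers (a sketch in Python; the per-lane rates of 2 Gb/s for PCIe 1.x and 4 Gb/s for PCIe 2.0 are the usual raw figures after 8b/10b encoding, and protocol overhead is ignored):

```python
# Raw per-direction PCIe bandwidth vs. what a quad-port gigabit NIC needs.
# Per-lane rates are the standard post-8b/10b raw figures; TLP/protocol
# overhead is ignored, so "OK" means "roughly enough", not a guarantee.
need_gbps = 4 * 1.0  # four gigabit ports, full rate, per direction

configs = {
    "PCIe 1.x x1": 1 * 2.0,  # 2 Gb/s raw per lane
    "PCIe 1.x x4": 4 * 2.0,
    "PCIe 2.0 x1": 1 * 4.0,  # 4 Gb/s raw per lane
}
for name, raw_gbps in configs.items():
    verdict = "OK" if raw_gbps >= need_gbps else "too slow"
    print(f"{name}: {raw_gbps:.0f} Gb/s raw vs {need_gbps:.0f} Gb/s needed -> {verdict}")
```

Which is why the cards ship as x4: that width is enough even on a v1.x board, while a single v2.0 lane just covers it.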

Inxsible:

@TS_b:

    I think it's because they only require x1 speeds on PCIe v2.0+.

    Someone with a PCIe v1.x slot would need all 4 lanes.

    Back in the day, the ends of the slots on mobos were open, allowing the user to make an intelligent selection.
    These days they dummy-proof them and, in the process, inconvenience the <1% who would care.

Fair enough.

And that's why it's important to also find out which PCIe version the motherboard provides/supports.
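
If you're on Linux, one way to check what a slot actually negotiated is the standard PCI sysfs attributes (a minimal sketch; `lspci -vv` shows the same information under LnkCap/LnkSta):

```python
# List the negotiated PCIe link speed and width for each PCI device,
# using the standard Linux sysfs attributes. Devices without a PCIe
# link (e.g. legacy PCI functions) simply don't expose these files.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        speed = (dev / "current_link_speed").read_text().strip()
        width = (dev / "current_link_width").read_text().strip()
    except OSError:
        continue  # no PCIe link attributes for this device
    print(f"{dev.name}: {speed}, x{width}")
```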

jahonix:

            Interesting read, thanks!

Just found this as far as PCIe throughput is concerned.
In my day we had fights between PCI and AGP, after ISA slots were abandoned. Well, I think I'm getting old.

Inxsible:

Cool, so there are open-ended x1 slots, which may fit bigger cards without having to file anything away. Why didn't they do this from the get-go? It would have saved so much headache.

TS_b:

They used to do that. You can find it on older mobo designs.

They've since stopped doing it (AFAIK).

I think they stopped because people were putting things like video cards into open-ended slots that couldn't handle the required bandwidth and then assumed the mobo was bad when it didn't work.
So now they kiddie-proof them.

Inxsible:

@TS_b:

    They used to do that. You can find it on older mobo designs.

    They've since stopped doing it (AFAIK).

    I think they stopped because people were putting things like video cards into open-ended slots that couldn't handle the required bandwidth and then assumed the mobo was bad when it didn't work.
    So now they kiddie-proof them.

Making it inconvenient for us. :(

TS_b:

Haha, yup, but there's always the option to open the slot yourself or trim the card; generally speaking, you're better off modifying whichever of the two devices is easier to replace.

Then there are riser cards, but those can make it difficult to fit the card into very small cases. 1U cases often work well with risers, though.

For pfSense, though, and the choice between the J3355 and J3455, people are probably better off with the J3355.

It will be better for OpenVPN and will handle GbE routing and firewalling with ease.

The J3455 will mostly shine for a user who needs more significant throughput with Suricata and has modest or no need of OpenVPN.

VAMike:

@TS_b:

    I think it's because they only require x1 speeds on PCIe v2.0+.

    Someone with a PCIe v1.x slot would need all 4 lanes.

That, and because server motherboards basically never come with an x1 slot, so for their target market it's a non-issue. Also, by using all 4 lanes of PCIe 2.0, they theoretically get better performance to/from the buffers on the card. That may matter in the target market, but it's much less relevant for a firewall and completely irrelevant for a low-power home firewall.

Guest:

Which option should I go for, bearing in mind cost, internet speeds up to 200 Mbps, OpenVPN usage, and the need for AES-NI?

The APU2C4 would be nice if there are no other needs, offered services, enabled functions, or installed packages.
The Qotom box would also be nice, with more horsepower for installing and running more packages and services.

whosmatt:

@Inxsible:

    The question is: why do they make them x4 if all they require is x1 speed?

It's the difference between PCIe 1.0 and 2.0 or 3.0. The quad-port server NICs that many of us use or recommend absolutely do require 4 lanes of PCIe 1.0 bandwidth to function at full performance.

GPU miners use riser cables all the time, and that works because the x16 GPUs can get away with x1 bandwidth: they're doing massive compute operations on relatively small chunks of data, so the bandwidth of the interface isn't a problem. It is a problem with something like an HBA or a NIC, though, especially when that NIC uses only PCIe 1.0.
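
The distinction is easy to see with a toy model (the per-device figures below are illustrative assumptions, not measurements from any datasheet): what matters is whether the device has to push all of its working data across the link.

```python
# Toy model of link-bound vs. compute-bound devices on a narrow link.
# The per-device figures are illustrative assumptions, not measurements.
PCIE1_X1_GBPS = 2.0  # raw Gb/s for one PCIe 1.0 lane (after 8b/10b)

devices = {
    "mining GPU": 0.1,    # streams little data, then computes on it for a long time
    "quad-gig NIC": 4.0,  # every packet has to cross the link
}
for name, link_need_gbps in devices.items():
    verdict = "link-bound" if link_need_gbps > PCIE1_X1_GBPS else "fine on x1"
    print(f"{name}: ~{link_need_gbps} Gb/s over the link -> {verdict}")
```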
