Old PC or newer box
-
I think because they only require x1 speeds for PCIe v2.0+.
Someone with an x1 PCIe v1.x slot would need all 4 lanes.
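The lane arithmetic above can be sketched roughly. The per-lane numbers below are ballpark usable figures after encoding overhead, not exact spec values:

```python
# Rough usable bandwidth per lane, per direction, in MB/s, after
# encoding overhead (8b/10b for PCIe 1.x/2.0, 128b/130b for 3.0).
# These are approximations, not exact spec values.
PER_LANE_MB_S = {"1.x": 250, "2.0": 500, "3.0": 985}

def lanes_needed(payload_mb_s, gen):
    """Smallest standard lane count (x1/x2/x4/...) that covers the payload."""
    lanes = 1
    while lanes * PER_LANE_MB_S[gen] < payload_mb_s:
        lanes *= 2
    return lanes

# A quad-port GbE NIC can move up to 4 x 1 Gbit/s = 500 MB/s each direction.
quad_gbe_mb_s = 4 * 1000 / 8

for gen in ("1.x", "2.0", "3.0"):
    print(f"PCIe {gen}: x{lanes_needed(quad_gbe_mb_s, gen)}")
```

By raw line rate this says a quad-port card needs x2 at gen 1 and only x1 at gen 2+; real cards also pay descriptor/DMA overhead and want headroom, which is presumably why they're keyed x4.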
Back in the day, the ends of the slots on mobos were open, allowing the user to make an intelligent selection.
These days they dummy-proof them and in the process inconvenience the <1% who would care. -
Fair enough.
And that's why it's important to also find out which PCIe version the motherboard provides/supports.
-
Interesting read, thanks!
Just found this as far as PCIe throughput is concerned.
In my day we had fights between PCI and AGP after ISA slots were abandoned. Well, I think I'm getting old. -
Cool, so there are open-ended x1 slots, which may fit bigger cards without having to file anything away. Why didn't they do this from the get-go? It would have saved so much headache.
-
They used to do that; you can find open-ended slots on older mobos.
They've since stopped doing it (afaik).
I think they stopped because people were putting things like video cards into open-ended slots that couldn't handle the required bandwidth, then assumed the mobo was bad when it didn't work.
So now they kiddie-proof them. -
Making it inconvenient for us. :(
-
Haha yup, but there's always the option to open the slot yourself or trim the card; generally speaking you're better off modifying whichever of the two devices is easier to replace.
Then there are riser cards, but those can make it difficult to fit the card into very small cases. 1U cases often work well with risers, though.
For pfSense, though, and the delineation between the J3355 and J3455, people are probably generally better off with the 3355.
It will be better for OpenVPN (the dual-core J3355 clocks higher than the quad-core J3455, and OpenVPN is largely single-threaded) and will handle GbE routing and firewalling with ease.
The 3455 will mostly shine for a user who needs more significant throughput with Suricata and has either modest or no need of OpenVPN.
-
I think because they only require x1 speeds for PCIe v2.0+.
Someone with an x1 PCIe v1.x slot would need all 4 lanes.
That, and because server motherboards basically never come with an x1 slot, so for their target market it's a non-issue. Also, by using all 4 lanes of PCIe 2.0, they theoretically get better performance to/from the buffers on the card. That may matter in the target market, but it's much less relevant for a firewall and completely irrelevant for a low-power home firewall.
-
What option should I go for, bearing in mind cost, internet speed up to 200 Mbps, OpenVPN usage, and the need for AES-NI?
The APU2C4 would be nice if there are no other needs, offered services, enabled functions, or installed packages.
The Qotom box would also be nice, with more horsepower for installing and running more packages and services. -
Question is why do they make them in x4, if all they require is x1 speed?
It's the differences between PCIe 1.0 vs 2.0 or 3.0. The quad port server NICs that many of us use or recommend absolutely do require 4 lanes of PCIe 1.0 bandwidth to function with full performance.
GPU miners use riser cables all the time, and that works because the x16 GPUs can get away with x1 bandwidth: they're doing massive compute operations on relatively small chunks of data, so the bandwidth of the interface isn't a problem. It is a problem with something like an HBA or a NIC, though, especially when that NIC only supports PCIe 1.0.
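The GPU-vs-NIC contrast can be made concrete with a rough back-of-the-envelope comparison (the per-direction bandwidth figures are assumptions, not spec-exact):

```python
# Ballpark per-direction throughput in MB/s (assumed figures, not spec-exact).
PCIE1_X1 = 250           # one PCIe 1.0 lane, after 8b/10b encoding
SINGLE_GBE = 1000 / 8    # one gigabit port at line rate (125 MB/s)
QUAD_GBE = 4 * SINGLE_GBE

# A mining GPU only ships small work units across the link and then crunches
# them for a long time, so x1 is plenty. A NIC must stream every packet
# across the link, so the comparison is direct:
print(PCIE1_X1 >= SINGLE_GBE)  # single-port GbE fits in x1 PCIe 1.0
print(PCIE1_X1 >= QUAD_GBE)    # a quad-port card outgrows it
```

So even a single x1 lane of PCIe 1.0 covers one gigabit port, but a quad-port card wants more lanes or a newer PCIe generation.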