Old PC or newer box
-
The J3455 doesn't come with a PCIe x4-sized slot on any mini-ITX board. You have to get the larger board from Asus, or buy a single-port NIC, or modify a dual or quad port NIC…
You can always fit the x4 card in the x1 slot without problems; you will lose speed, however.
There are, in all, 4 mini-ITX boards available on Newegg featuring either the J3355 or J3455 (2 for each):
-
ASRock J3355B-ITX – has an x16 slot but runs in x2 mode
-
Asus Prime J3355I-C – has an x4 slot but runs in x1 mode
-
ASRock J3455-ITX – has an x1 slot and obviously only runs in x1 mode
-
ASRock J3455B-ITX – has an x16 slot but runs in x2 mode
In fact, in the pictures you can see that not all pins of the slots are connected to the board (especially on the x16 slots – it's very clear). That said, checking the board manual is the only surefire way to confirm whether a particular slot has all its lanes available.
So whichever of the 4 you choose, you will lose speed, because all the Intel i350-T2 cards I found are x4 cards. I still have to check whether the card itself runs in x4 or some lower mode. The PCIe version in use also needs to be confirmed, because that, along with the number of lanes available in the slot, dictates the speed.
The above boards have been used by many on this fine forum, and maybe someone can tell us more about the PCIe configuration with regard to adding an Intel i350-T2 card and the speed it gets. It obviously can work, since so many use it, but will it exploit the full potential of the server NIC card?
None of this is correct.
A physical x4 card will not fit in a physical x1 slot without physical modifications or purchasing an x1 to x4+ riser.
There will be no performance or speed penalty in any combination.
PCIe v2.0 at x1 (the slowest) speed will handle four ports of full duplex gigabit throughput without penalty.
The problem has nothing to do with the speed of the card or the board, it has to do with whether or not the card will fit in the slot… Like Legos.
Also, Asus sells a J3455 board with a >x1 slot. That's your only option for the J3455, without a riser or modification, if you want more than a single-port NIC.
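To put some numbers behind the "no speed penalty" claim, here's a back-of-envelope sketch (my own arithmetic, not from a datasheet; 5 GT/s and 8b/10b coding are the standard PCIe 2.0 figures, and packet overhead is ignored):

```python
# Back-of-envelope check: can one PCIe 2.0 lane carry four gigabit
# Ethernet ports at full duplex? PCIe links are themselves full duplex,
# so we compare per-direction numbers.

GBPS = 1e9  # bits per second

# PCIe 2.0: 5 GT/s per lane, 8b/10b line coding -> 80% efficiency
pcie2_x1_per_direction = 5e9 * 8 / 10  # 4.0 Gbit/s each way

# Four gigabit ports, each full duplex
nic_per_direction = 4 * 1 * GBPS       # 4.0 Gbit/s each way

print(f"PCIe 2.0 x1: {pcie2_x1_per_direction / GBPS:.1f} Gbit/s per direction")
print(f"Quad GbE:    {nic_per_direction / GBPS:.1f} Gbit/s per direction")
```

The nominal numbers meet exactly; real PCIe packet overhead shaves a little off, but real traffic rarely saturates all four ports in both directions at once.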
-
-
None of this is correct.
A physical x4 card will not fit in a physical x1 slot without physical modifications or purchasing an x1 to x4+ riser.
Correct. I don't believe I said you could, either. I agree I could have been more precise.
There will be no performance or speed penalty in any combination.
So you are saying that an x16 slot working in x1 mode will be just as fast as an x16 slot in x16 mode? I don't think so. The number of available lanes does matter for throughput. And you yourself say "x1 (the slowest) speed…" in the very next comment, so that's a contradiction.
PCIe v2.0 at x1 (the slowest) speed will handle four ports of full duplex gigabit throughput without penalty.
I don't know much about this, as I haven't researched how much bandwidth a dual or quad port LAN card would need.
The problem has nothing to do with the speed of the card or the board, it has to do with whether or not the card will fit in the slot… Like Legos.
Two ways to skin that cat.
-
You can file off the end of the x1 slot so that it doesn't block the x4, x8, or x16 card
-
Or file off the card edge itself so it fits in a smaller slot
Also, Asus sells a J3455 board with a >x1 slot. That's your only option for the J3455, without a riser or modification, if you want more than a single-port NIC.
Is this Asus J3455 a mini ITX board? Because I haven't found it on Newegg or Amazon. Can you please provide a link?
-
-
It's not ITX; that's what I said in my earlier post: you have to buy a bigger board.
The x16 isn't a factor for NICs; no modern dual or quad port Ethernet NICs are x16, they are x4. And a quad port gigabit NIC only needs a PCIe v2.0 x1 link to max out its speed.
-
The bottom line is that for most people the J3355 is the choice for pfSense.
You'll note that in my earlier post I did state that you can modify a card to fit an x1 slot (a couple of times).
But few people are willing to do that, so it's kind of moot. So that leaves you with the riser option.
-
It's not ITX; that's what I said in my earlier post: you have to buy a bigger board.
The x16 isn't a factor for NICs; no modern dual or quad port Ethernet NICs are x16, they are x4. And a quad port gigabit NIC only needs a PCIe v2.0 x1 link to max out its speed.
Agreed. My earlier post said the same thing: I have only found x4 LAN cards. The question is, why do they make them x4 if all they require is x1 speed?
-
I think because they only require x1 speeds for PCIe v2.0+.
Someone with an x1 PCIe v1.x slot would need all 4 lanes.
Back in the day the ends of the slots on mobos were open, allowing the user to make an intelligent selection.
These days they dummy-proof them and in the process inconvenience the <1% who would care.
-
I think because they only require x1 speeds for PCIe v2.0+.
Someone with an x1 PCIe v1.x slot would need all 4 lanes.
Back in the day the ends of the slots on mobos were open, allowing the user to make an intelligent selection.
These days they dummy-proof them and in the process inconvenience the <1% who would care.
Fair enough.
And that's why it's important to also find out which PCIe version the motherboard provides/supports.
-
Interesting read, thanks!
Just found this as far as PCIe throughput is concerned.
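For reference, the commonly quoted per-lane numbers work out roughly like this (my own sketch using the nominal line rates and coding efficiencies; real transfers lose a bit more to packet headers):

```python
# Approximate usable per-lane PCIe throughput by generation, after
# line coding overhead (8b/10b for 1.x/2.0, 128b/130b for 3.0).

generations = {
    # name: (transfer rate in GT/s per lane, coding efficiency)
    "PCIe 1.x": (2.5, 8 / 10),
    "PCIe 2.0": (5.0, 8 / 10),
    "PCIe 3.0": (8.0, 128 / 130),
}

for name, (gts, eff) in generations.items():
    gbps = gts * eff
    print(f"{name}: {gbps:.2f} Gbit/s per lane per direction "
          f"(~{gbps * 1000 / 8:.0f} MB/s)")
```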
In my day we had fights between PCI and AGP, after ISA slots were abandoned. Well, I think I'm getting old.
-
Cool, so there are open-ended x1 slots, which may fit bigger cards without having to file anything away. Why didn't they do this from the get-go? It would have saved so much headache.
-
They used to do that. You can find it on old mobos.
They've since stopped doing it (AFAIK).
I think they stopped because people were putting things like video cards into open-ended slots that couldn't handle the required bandwidth, and then assumed the mobo was bad when it didn't work.
So now they kiddie-proof them.
-
They used to do that. You can find it on old mobos.
They've since stopped doing it (AFAIK).
I think they stopped because people were putting things like video cards into open-ended slots that couldn't handle the required bandwidth, and then assumed the mobo was bad when it didn't work.
So now they kiddie-proof them.
Making it inconvenient for us. :(
-
Haha, yup, but there's always the option to open it yourself or trim the card; generally speaking you're better off modifying whichever of the two devices is easier to replace.
Then there's riser cards, but those can make it difficult to fit the card into very small cases. 1U cases often work well with risers though.
For pfSense, though, and the delineation between the J3355 and J3455, people are probably generally better off with the J3355.
It will be better for OpenVPN and will handle GbE routing and firewalling with ease.
The J3455 will mostly shine for a user who needs more significant throughput with Suricata and has either modest or no need of OpenVPN.
-
I think because they only require x1 speeds for PCIe v2.0+.
Someone with an x1 PCIe v1.x slot would need all 4 Lanes.
That, and because server motherboards basically never come with an x1 slot, so for their target market it's a non-issue. Also, by using all 4 lanes of PCIe 2.0 they theoretically get better performance to/from the buffers on the card. That may matter in the target market, but it's much less relevant for a firewall and completely irrelevant for a low-power home firewall.
-
What option should I go for, bearing in mind cost, internet speeds up to 200 Mbps, OpenVPN usage, and the need for AES-NI?
An APU2C4 would be nice if there will be no other needs, offered services, enabled functions, or installed packages.
The Qotom box would also be nice, with more horsepower for installing and running more packages and services.
-
The question is, why do they make them x4 if all they require is x1 speed?
It's the difference between PCIe 1.0 and 2.0 or 3.0. The quad port server NICs that many of us use or recommend absolutely do require 4 lanes of PCIe 1.0 bandwidth to function at full performance.
GPU miners use riser cables all the time, and that works because x16 GPUs can get away with x1 bandwidth: they're doing massive compute operations on relatively small chunks of data, so the bandwidth of the interface isn't a problem. It is a problem with something like an HBA or a NIC, though, especially when that NIC uses only PCIe 1.0.
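A quick sketch of why those cards are keyed x4 (my own arithmetic with the nominal per-lane, per-direction rates after 8b/10b coding; PCIe packet overhead is ignored, which in practice makes x2 on PCIe 1.0 even tighter):

```python
# Compare what four full-duplex gigabit ports need against per-lane
# throughput at various lane counts and PCIe generations.

need_gbps = 4 * 1.0  # four gigabit ports, Gbit/s per direction

for gen, per_lane_gbps in (("PCIe 1.0", 2.0), ("PCIe 2.0", 4.0)):
    for lanes in (1, 2, 4):
        total = per_lane_gbps * lanes
        verdict = "enough" if total >= need_gbps else "too slow"
        print(f"{gen} x{lanes}: {total:>4.0f} Gbit/s per direction -> {verdict}")
```

On PCIe 1.0 an x1 link falls short and x2 has zero headroom, so the cards ship as x4; on PCIe 2.0 a single lane already covers it.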