New build for 1G speeds
-
It's not a chipset issue, it's a driver issue. On a PPPoE interface, packets are received on only one NIC driver queue by the igb driver. This has been discussed many times. It will use only one core of the CPU.
https://lists.freebsd.org/pipermail/freebsd-bugs/2015-October/064334.html
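A quick way to confirm whether a given box is hitting this is to look at the per-queue interrupt counters. Below is a minimal Python sketch that parses FreeBSD's vmstat -i output; it assumes the older "igbN:que M" interrupt labels (newer releases name the queues differently), so treat it as a starting point rather than a drop-in tool.

#!/usr/bin/env python3
# Minimal sketch: summarize per-queue interrupt totals for igb NICs from
# FreeBSD's `vmstat -i`. If nearly all interrupts land on one queue while
# the others sit near zero, traffic (e.g. PPPoE) is being handled by a
# single queue and therefore a single core.
import re
import subprocess
from collections import defaultdict

# Matches lines like "irq264: igb0:que 0    52235323    51" (the label
# format varies by FreeBSD release; adjust the pattern if yours differs).
QUEUE_RE = re.compile(r"(igb\d+):que (\d+)\s+(\d+)")

counts = defaultdict(dict)
output = subprocess.run(["vmstat", "-i"], capture_output=True, text=True, check=True).stdout
for line in output.splitlines():
    m = QUEUE_RE.search(line)
    if m:
        nic, queue, total = m.group(1), int(m.group(2)), int(m.group(3))
        counts[nic][queue] = total

for nic, queues in sorted(counts.items()):
    nic_total = sum(queues.values()) or 1
    print(nic)
    for q, n in sorted(queues.items()):
        print(f"  que {q}: {n:>12}  ({100 * n / nic_total:5.1f}%)")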
The igb driver is better than em in many respects, except for this. It's worth dropping in an em-based card for the interface dealing with PPPoE (typically the WAN).
-
It's not a chipset issue, it's a driver issue. On a PPPoE interface, packets are received on only one NIC driver queue by the igb driver. This has been discussed many times. It will use only one core of the CPU.
https://lists.freebsd.org/pipermail/freebsd-bugs/2015-October/064334.html
I don't use PPPoE. What I was referring to is that the em driver will fail under high interrupt load on many retail chipsets until the machine is rebooted. It is related to this error:
https://forum.pfsense.org/index.php?topic=110224.0
All of my testing has been done using a 64-bit machine, so i386 is not the issue.
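For what it's worth, a rough way to spot that kind of hang without sitting at the console is to scan the kernel log for em reset/watchdog messages. This is only a sketch: the "Watchdog" wording is an assumption and varies by FreeBSD and driver version, so check what your box actually logs before relying on it.

#!/usr/bin/env python3
# Sketch: scan dmesg for em interfaces that have logged watchdog/reset
# messages, a common symptom when the NIC wedges under heavy interrupt load.
# The "Watchdog" wording is assumed and may differ on your FreeBSD version.
import re
import subprocess

PATTERN = re.compile(r"^(em\d+):.*[Ww]atchdog", re.MULTILINE)

dmesg = subprocess.run(["dmesg"], capture_output=True, text=True, check=True).stdout
hits = sorted(set(PATTERN.findall(dmesg)))
if hits:
    print("Possible em resets logged on:", ", ".join(hits))
else:
    print("No em watchdog/reset messages found in dmesg.")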
-
I was just noting this for the one who started this thread. He/She may be using PPPoE, and should know about this issue.
-
I was just noting this for the one who started this thread. He/She may be using PPPoE, and should know about this issue.
Fair enough. Good information to know!
-
I was just noting this for the one who started this thread. He/She may be using PPPoE, and should know about this issue.
This information is appreciated. My ISP doesn't use PPPoE, so it seems I'm safe from this issue and should stick with igb in this particular case.
-
For what it's worth, I'm running six pfsense machines in production and all but one of them use the em driver. I've never had any issues. I've always considered it well supported and very stable.
-
For what it's worth, I'm running six pfsense machines in production and all but one of them use the em driver. I've never had any issues. I've always considered it well supported and very stable.
Which Intel Ethernet chipset(s)?
I had issues with my onboard 82574 chipsets when pushing 1 Gbit/s with Snort and pfBlockerNG (with DNSBL) enabled.
-
For what it's worth, I'm running six pfsense machines in production and all but one of them use the em driver. I've never had any issues. I've always considered it well supported and very stable.
Which Intel Ethernet chipset(s)?
I had issues with my onboard 82574 chipsets when pushing 1 Gbit/s with Snort and pfBlockerNG (with DNSBL) enabled.
Most of them are virtual running on ESXi. The one that is not is using the 82571EB chipset.
Edit: To clarify, four VMs are using the em driver. One physical machine is using the em driver with the 82571EB chipset. The other physical machine uses the bce driver.
-
For what it's worth, I'm running six pfsense machines in production and all but one of them use the em driver. I've never had any issues. I've always considered it well supported and very stable.
Which Intel Ethernet chipset(s)?
I had issues with my onboard 82574 chipsets when pushing 1 Gbit/s with Snort and pfBlockerNG (with DNSBL) enabled.
Most of them are virtual running on ESXi. The one that is not is using the 82571EB chipset.
Edit: To clarify, four VMs are using the em driver. One physical machine is using the em driver with the 82571EB chipset. The other physical machine uses the bce driver.
Thanks. I was going to try the VM route, as my research showed it would fix the issues I was having, but it was easier for me to just upgrade to the i350.
-
If you want to go rackmount, why not pick up a used HP ProLiant DL360 G7 (1U) on eBay? You can get a pretty loaded one for around $200, and they're beasts. I have one (way overkill) running pfSense at home; it has 2x Xeon X5650 processors (hex-core, 2.66 GHz, for a total of 24 CPUs with HT), 48 GB RAM, and four built-in gigabit NICs. I have 72 GB 15K SAS HDDs in RAID 5. I don't even get close to taxing it.