Nve driver timeout issues.
I'm using pfSense 1.0.1 on an MSI K8N Neo4-F (amd64) motherboard that uses the nForce4 chipset. I'm using the onboard NIC (nve0) on a PPPoE DSL connection and am seeing a lot of device timeouts in the system logs. Here's a snippet from my pfSense system log:
<snip>
Jan 4 16:48:47 check_reload_status: reloading filter
Jan 4 08:06:57 php: : DEVD Ethernet attached event for nve0
Jan 4 08:06:57 php: : DEVD Ethernet detached event for nve0
Jan 4 13:06:56 check_reload_status: rc.linkup starting
Jan 4 13:06:56 kernel: nve0: link state changed to UP
Jan 4 13:06:56 kernel: nve0: link state changed to DOWN
Jan 4 13:06:56 kernel: nve0: device timeout (2)
Jan 4 08:01:56 php: : DEVD Ethernet attached event for nve0
Jan 4 08:01:56 php: : DEVD Ethernet detached event for nve0
Jan 4 13:01:56 check_reload_status: rc.linkup starting
Jan 4 13:01:54 kernel: nve0: link state changed to UP
Jan 4 13:01:54 kernel: nve0: link state changed to DOWN
Jan 4 13:01:54 kernel: nve0: device timeout (1)
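One quick way to gauge how often the watchdog is firing is to count the "device timeout" lines for nve0. On a live pfSense 1.x box the system log is circular, so you'd dump it to text first (clog /var/log/system.log) and pipe that in; the sketch below just runs the count over the two sample lines from the snippet:

```shell
# Count nve0 watchdog timeouts. On pfSense 1.x, feed it the real log:
#   clog /var/log/system.log | grep -c 'nve0: device timeout'
# Demo against the two sample lines from the snippet above:
printf '%s\n' \
  'Jan 4 13:06:56 kernel: nve0: device timeout (2)' \
  'Jan 4 13:01:54 kernel: nve0: device timeout (1)' |
  grep -c 'nve0: device timeout'
# prints 2
```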
<snip>
I've done a few Google searches and found that I'm not the only one experiencing these issues. From what I've read, it appears to be an issue with a watchdog in the NVIDIA code itself. In my travels I found that there is a patch available (is it currently in pfSense?):
However, I also found that there's work in progress to port the OpenBSD nfe driver to FreeBSD, though it appears to be available only for 7-CURRENT and 6.2-PR:
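If that port does land as a loadable module, I imagine trying it would look something like loading it from /boot/loader.conf on a test box (the module name if_nfe.ko is my assumption, going by the OpenBSD driver name; the interface would then show up as nfe0 and would need reassigning in pfSense):

```
# /boot/loader.conf -- sketch only; assumes the ported driver ships
# as a loadable kernel module named if_nfe.ko (name not confirmed)
if_nfe_load="YES"
```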
Does anyone have any input on how I should proceed to fix this issue, or on when updated drivers will be (or already have been) included in pfSense? I'm aware of the current "non-stable" builds of pfSense, but I'm wary of putting them on a production system.
Any input would be much appreciated.
My recommendation would be to toss that NVIDIA piece of garbage out the nearest airlock and get a real platform to work with. The first step is to get real NICs, like Intel 1000BT desktop NICs. My second recommendation would be to toss the motherboard and get an Intel- or AMD-chipset board to work with. When you're working with Linux/FreeBSD/Unix, any deficiency in hardware design (weak NICs, buggy ACPI, a broken BIOS) becomes blatantly obvious. Every piece of NVIDIA (read: non-video) hardware I've touched has been broken some way, somehow. And when I say broken, I mean completely broken hardware design that will never be fixed, because the Linux/FreeBSD folks don't tend to fix broken hardware in software unless the fix is provided by the vendor (NVIDIA), and that's not happening.
The old nForce2s used to lock up randomly when you copied files between two hard drives and played an MP3 in Linux or FreeBSD, because the bus timers were seriously broken. You'd get IRQ locking issues. They fixed this in software in the Windows driver, but the hardware stayed eternally broken. The nForce3s had crazy ACPI issues, and the nForce4s would just randomly lock up due to mysterious NIC/bus issues.
My recommendation would be to stay far, far away from any NVIDIA chipset unless it's a Windows-only box. And even then, make triple backups of your data, because the built-in NVIDIA RAID has a tendency to trash your data faster than you can install Windows at times (yes, more defective, junky NVIDIA board design). Basically, I've never seen an NVIDIA board work properly with Linux/Unix without something on it being broken. I would just find another platform to work with.