@voodooutt
Actually, I worked it out for myself and literally just came back here to post the results.
#1: Apparently some versions of FreeBSD in a VM do not like the OVMF/Q35 combination. I used SeaBIOS and i440fx instead, and set the OS type to "other" since Proxmox has no specific option for BSD. I ran across threads on the FreeBSD forums about 11.1 and other 11.x releases having various issues under OVMF/Q35 VMs, so I figured I'd stack the deck in my favor.
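For reference, the same setup from the CLI looks roughly like this (VM ID, storage, and ISO name are placeholders for your own; i440fx is what Proxmox calls machine type "pc"):

qm create 100 --name pfsense --memory 4096 --cores 2 \
    --bios seabios --machine pc --ostype other \
    --scsihw virtio-scsi-pci --scsi0 local-lvm:32 \
    --cdrom local:iso/pfsense.iso    # substitute your actual ISO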
#2: I added "blacklist intel" to blacklist.conf to completely disable the NIC on the Proxmox host. BEWARE! This would cause nine kinds of trouble in a server with multiple Intel devices! My server has 4x onboard NICs, a dual-port 10GbE mezzanine card, and nearly everything else in it is an Intel product as well; that blacklist entry would pretty much render it a boat anchor. The hardware hosting pfSense is an AMD 3800X on a Gigabyte board with three different brands of NICs in it: Realtek gigabit onboard (almost worthless, IMO), the dual-port Intel Pro/1000 card passed through to pfSense, and a dual-port Solarflare 10GbE card. This step isn't strictly necessary, since it does work without it, but I wanted the LEDs on the NIC and switch off unless pfSense was up and active, for troubleshooting purposes.
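For anyone replicating this, the change is along these lines. One caveat: the blacklist works per kernel module name, and the right name depends on the exact card (lspci -k shows which driver is bound); e1000e below is my guess for a PCIe Pro/1000, not necessarily what you need:

# /etc/modprobe.d/blacklist.conf
# keep the Proxmox host from ever loading a driver for the passthrough NIC
blacklist e1000e

Then rebuild the initramfs and reboot so it sticks:

update-initramfs -u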
#3: Pass the NIC through to pfSense as normal. Since the machine type is i440fx, the PCIe checkbox in the GUI is greyed out.
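From the CLI, that step is just the following (03:00.x is a placeholder; get the real address from lspci):

lspci | grep -i ethernet         # find the card's PCI address
qm set 100 --hostpci0 03:00.0    # first port of the dual-port card
qm set 100 --hostpci1 03:00.1    # second port
# note: no pcie=1 flag here -- that option requires a q35 machine type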
After I completed that, the VM came right up, and I got bare-metal-equivalent throughput and CPU load while hammering the snot out of it. I was getting absolutely terrible throughput with every other method I tried, but this is actually slightly faster than my current pfSense installation running on bare metal.
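(I used a speed test for the screenshots below, but an iperf3 run through the firewall works just as well for hammering it; the IP is a placeholder for a host on the far side:)

iperf3 -s                           # on a box behind pfSense
iperf3 -c 192.168.1.50 -P 4 -t 60   # from the other side: 4 parallel streams, 60 seconds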
[Attachments: speedtest.jpg, cpuload.jpg]