Just a follow-up here in case anyone runs into issues like this in the future.
I was able to resolve this by changing two settings. First, I set the ESXi host power management policy to High Performance; it defaulted to Balanced. This stopped the clock throttling on the CPU, which in my case is a very low-power 10 W SoC. I did not notice an increase in power usage, but it helped to stabilize the responsiveness of the ESXi host on this low-powered system.
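If you would rather script this than click through the vSphere UI, here is a minimal sketch using pyVmomi (the vSphere Python SDK). The hostname and credentials are placeholders, and it assumes a single standalone host; the High Performance policy is the one ESXi advertises with the short name "static".

```python
# Sketch: set an ESXi host's power policy to High Performance via pyVmomi.
# Hostname and credentials below are placeholders -- adjust for your setup.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
si = SmartConnect(host="esxi.local", user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]  # assumes a single standalone host
    view.Destroy()

    power = host.configManager.powerSystem
    # Look up the "High Performance" policy (shortName "static") among the
    # policies this host supports, then apply it by key.
    for policy in power.capability.availablePolicy:
        if policy.shortName == "static":
            power.ConfigurePowerPolicy(policy.key)
            print("Power policy set to", policy.name)
            break
finally:
    Disconnect(si)
```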
The second change was to edit the pfSense VM, go to the advanced options, and set Latency Sensitivity to High; by default it is set to Normal. A High setting means that the VM's CPU and memory must be fully reserved for it to power on, so make sure your host has enough resources to support this.
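This one can be scripted as well. The sketch below reuses the si connection from the previous snippet and assumes the VM is named "pfsense" (a placeholder). Besides locking the memory reservation, High latency sensitivity also expects a full CPU reservation (vCPU count times the per-core clock), which the sketch derives from the host's reported CPU frequency:

```python
# Sketch: set a VM's Latency Sensitivity to High via pyVmomi. Reuses the
# "si" ServiceInstance from the previous snippet; the VM name "pfsense"
# is a placeholder.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "pfsense")
view.Destroy()

spec = vim.vm.ConfigSpec()
spec.latencySensitivity = vim.LatencySensitivity(
    level=vim.LatencySensitivity.SensitivityLevel.high)

# High latency sensitivity requires a full memory reservation to power on.
spec.memoryReservationLockedToMax = True

# It also expects a full CPU reservation: vCPU count times the host's
# per-core clock in MHz, read from the host the VM currently sits on.
host = vm.runtime.host
core_mhz = host.hardware.cpuInfo.hz // 1_000_000
spec.cpuAllocation = vim.ResourceAllocationInfo(
    reservation=vm.config.hardware.numCPU * core_mhz)

vm.ReconfigVM_Task(spec=spec)  # submit the reconfiguration task
```

Once the reconfigure completes, the VM's Summary tab should show the new latency sensitivity, and the VM will refuse to power on if the host cannot satisfy the reservations.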
Changing the two settings above resolved the latency differences that I was seeing. I am now getting identical latency readings between bare metal and a virtualized pfSense VM with NIC passthrough.
I tested this method with 2.4.4p2, 2.4.4p3, and current builds of the 2.5 development images. All showed the same outcome: latency reduced to that of a bare metal system.