Sustained Throughput Question
-
I'm running mostly virtual machines on ESXi 5.0 (some on 4.0, and some now on MSFT Hyper-V).
I have decent NICs and decent switches throughout. My developer is convinced that the firewall (pfSense 2.0 Release) is to blame for throughput not being at wire speed / full pipe capacity. Almost everything runs through the (virtual) pfSense box - but since the date of install it has never gone above 41% CPU, 9% memory, or 17% disk usage.
If I run: VM >> 10G vNIC >> ESXi 5.0 host >> Intel Pro 1000 PT NIC >> Dell 2824 switch >> Intel Pro 1000 PT NIC >> ESXi 5.0 host >> 10G vNIC >> pfSense >> 10G vNIC >> ESXi 5.0 host >> Intel Pro 1000 PT NIC >> Dell 2824 switch >> Intel Pro 1000 PT NIC >> ESXi 5.0 (or 4.0) host >> 10G vNIC >> VM
(cliffs: running multiple robocopy jobs directly from one VM to another - but the path is VM to switch to pfSense box to switch to VM)
I'm convinced (and OK with it) that the issue is switch traffic and NIC throughput in general.
My developer is crapping on pfSense, saying "the firewall is slowing down the system."
We are getting ~700 Mbit sustained, and I think that's fine; hardware such as the switches and NICs is the bottleneck, not pfSense slowing it down. pfSense has never (ever) gone over the usage numbers above.
Feedback appreciated.
Cheers.
-
One thing I would check: it looks like you are taking those numbers from the dashboard. That can be misleading if you have assigned several cores to your pfSense VM. For example, if you have a 4-core VM the dashboard may show 41%, but one of those cores may be pinned at 100% (the core that's running the pf process), limiting throughput.
To see individual core usage, use top -SH. Try to avoid having the dashboard open anywhere while you do this, as it uses some CPU cycles itself - a significant number on my underpowered machine.
Steve
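For what it's worth, a minimal sketch of checking that from a shell on the pfSense box (console or SSH):

    # -S shows system processes, -H breaks them out per thread; watch whether any
    # single thread (e.g. the one doing the pf work) sits near 100% during the copy
    top -SH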
-
"run direct multiple robocopy jobs between one VM to another - but VM to switch to PFbox to switch to VM"
First question: are the files used for this robocopy test large? The bigger the better, I've found, for really pushing your gear. And are you sure your disks can do > 88 MB/s, both read and write?
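For reference, ~700 Mbit/s is roughly 700 / 8 ≈ 87.5 MB/s, which is where that 88 MB/s figure comes from. One rough way to sanity-check the disks on their own, assuming Windows VMs and with made-up paths, is a purely local robocopy (no network involved) and reading the Speed line it prints in its summary:

    rem local-only copy; C:\testdata and D:\scratch are hypothetical paths
    robocopy C:\testdata D:\scratch /E /NP
    rem if a purely local copy can't sustain ~90 MB/s, the disks are the
    rem ceiling, not the firewall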
Second question: when you say VM to switch to pfSense box to switch to VM, is this one VLAN to another (so passing through pfSense via an ACL or some other 'route')?
If not, and the VMs are on the same VLAN/subnet: to rule out pfSense, how about going from one VM (on host A) to another VM on host B? That path is host > hardware switch > host, so traffic still exits your host, crosses a physical switch, and comes back up the network stack in the second host, but it takes pfSense out of the path entirely.
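As a sketch (the share name is made up), that's just your same robocopy test pointed at a VM on the other host, same subnet, so pfSense never touches the traffic:

    rem run from a VM on host A against a VM on host B on the same VLAN/subnet
    rem \\vmB\scratch is a hypothetical share on the destination VM
    robocopy C:\testdata \\vmB\scratch /E /NP

Run a few of those in parallel, same as your original test, and compare the numbers.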
If you are going between subnets/routing, and your switch supports L3 routing, give it an IP on your VM's subnet. Then edit your VM's routing table and set the gateway for the other VM's subnet to the switch instead of the default gateway (pfSense) - no ACL, just a straight open route. How is the speed then?
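On a Windows VM that static route would look something like this (addresses entirely made up - use the other VM's subnet and the IP you gave the switch):

    rem send traffic for the other VM's subnet via the switch instead of pfSense
    route add 192.168.20.0 mask 255.255.255.0 192.168.10.254
    rem use "route -p add ..." instead if you want it to survive a reboot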