Performance with 10 GbE NICs
-
Is this on a physical box or a VM? Have you done any tuning?
pfSense running in a VM. I did heavy tuning on pfSense. I currently can't post the configuration because I moved from pfSense to an RHEL-based firewall (pfSense is not stable enough for my production environment and I need higher throughput). I have the configuration stored at the office; I'll try to post it on Sunday.
-
You are using Squid and SquidGuard.
It might be more beneficial to have in-flight tuning enabled:
net.inet.tcp.sendbuf_auto=1
net.inet.tcp.recvbuf_auto=1
net.inet.tcp.slowstart_flightsize=64
Squid optimization: it is more beneficial to increase the slow-start flight size via the net.inet.tcp.slowstart_flightsize sysctl than to disable delayed ACKs (default = 1, raised here to 64). The upper bound is send buffer / MSS: 262144 / 1460 ≈ 179 segments (where 1460 is the mssdflt, typical for a 1500-byte MTU).
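For reference, on a FreeBSD-based pfSense box these can be applied persistently via System Tunables or /etc/sysctl.conf. A minimal sketch with the values discussed above (verify the flight size against your own send buffer and MSS before applying):

```shell
# /etc/sysctl.conf fragment (FreeBSD/pfSense) -- values from this thread,
# not universal defaults; tune to your own link.
net.inet.tcp.sendbuf_auto=1           # auto-grow the TCP send buffer
net.inet.tcp.recvbuf_auto=1           # auto-grow the TCP receive buffer
net.inet.tcp.slowstart_flightsize=64  # segments in flight during slow start
# Upper bound: send buffer / MSS = 262144 / 1460 ~= 179 segments
```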
-
I am seeing 5.3 Gbit/s running a very heavy rule set in Snort.
No problemos. Running ESXi :)
-
I am seeing 5.3 Gbit/s running a very heavy rule set in Snort.
No problemos. Running ESXi :)
I know I've asked this before and can't remember if I got an answer: are you using VMXNET3 virtual NICs, or have you used VT-d to pass the physical Intel ports through to the VM? If the latter, it shouldn't be any different from what we're doing.
-
I use VMXNET3
-
No passthrough and using the Intel driver supplied.
-
The problem I have with running ESXi is the number of vLANs I have behind the firewall. I am pushing ~30 vLANs and the hard limit on vNICs on the latest version of ESXi is 10. 1 vLAN = 1 vNIC on the virtual host. 30 vLANs would mean I would need 30 vNICs and that is not possible.
It is a good idea and I may be able to apply this to lesser used firewalls that I have but I won't be able to use it to resolve my immediate issue without a significant redesign of the network.
I was really hoping someone would jump in and tell me that they are using NIC vendor x model number x and are not seeing these issues.
If I could easily get around the vLAN limitations mentioned above I would do it.
I am only using Squid/SquidGuard on one segment that is dedicated as a captive portal with our internal NAC solution. Once a system is registered they are off that segment. The throughput issues I am seeing are usually on a separate segment.
Please keep your ideas coming! I am at least getting some inventive ways to work around these issues in other locations.
-
I have 132 VLANs. Just pass ALL VLANs into the VM and pfSense will tag them.
-
I have 132 VLANs. Just pass ALL VLANs into the VM and pfSense will tag them.
wladikz
OK so you are saying to leave the 2 port LAGG in place and keep all of the vLANs tagged as they are now? Once ESXi is installed and the VM created I can load the backup XML and it will work? How do you have the vNICs configured to make that work?
I am willing to give this a try; I just can't wrap my head around how that vNIC is configured to make this work. I used to do VI administration, so I do get it, but that was back in VI 3.5 days. Sorry if I am being dense. I get how you can dedicate a physical interface to a VM, but won't I have to go under Edit Settings > choose the vNIC > choose the Network Connection > and assign a label?
Would I keep the firewall configured with 2 vNICs in a LAGG as well? Again, sorry if I am not getting this; I really appreciate your feedback.
-
Set the Port Group in vSphere to VLAN 4095. That enables trunking (VMware calls this VGT, Virtual Guest Tagging) and allows you to set the VLANs from within the VM.
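On a standard vSwitch this can also be done from the ESXi shell. A sketch assuming a vSwitch named vSwitch1 and a hypothetical port group name pfSense-trunk (both placeholders, substitute your own):

```shell
# Create a port group on the vSwitch (names here are examples)
esxcli network vswitch standard portgroup add \
    --portgroup-name=pfSense-trunk --vswitch-name=vSwitch1
# VLAN ID 4095 = trunk all VLANs (VGT); the guest does the tagging
esxcli network vswitch standard portgroup set \
    --portgroup-name=pfSense-trunk --vlan-id=4095
```

Attach the VM's vNIC to this port group, then define the individual VLAN interfaces inside pfSense as usual.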
-
All-
Sorry for the delay in response. It took us a while to juggle our day-to-day duties and stand up an adequate test system. I have begun testing the virtual FW (both 2.1 and 2.0.3) and ran into an issue with VMXNET2 NICs: they do not accept VLAN tagging. I was able to test with the e1000 NICs, but the performance was abysmal, and VMXNET3 NICs are not recognized.
Now I have done my homework and I see that other folks have brought up this issue in the forums. I wanted to see what the folks specifically responding in this thread did to overcome the issue.
What virtual NICS are you using to get the speeds you mentioned and how did you achieve this?
-
It requires tuning. We recently set up an internal 10G test lab.
IJS…
-
@gonzopancho:
It requires tuning. We recently set up an internal 10G test lab.
IJS…
Yes, I have been in contact with a few people at ESF. I have documented the tuning steps that I have taken thus far on the physical host at the beginning of this thread. Thus far I have not been able to get it right. That is why I have gone to the forums.
Do you have any suggestions for the virtual firewall? The folks who responded in this thread appear to be getting the kind of performance I am trying to achieve on a virtual implementation. So far I am getting the same results on my bare-metal test system. I have one more test to conduct, and then I am going to reach out to ESF again for their suggestions. Hopefully that lab you mentioned will help!
Thanks!