Internet uplink cut in half by pfSense
-
Hi All,
I have pfSense running with 2 cores, 4 GB RAM, and a 150 GB SSD. The WAN is connected to a 10 Gb switch and the Internet uplink is 1 Gb/s.
I have several webservers running behind pfSense. When I connect a webserver to the WAN directly, everything is fine: downloading from the webserver to a host in Azure averages 115 MB/s.
When I access the webserver through pfSense and perform the same download on the same Azure host, I reach a maximum throughput of 50.7 MB/s, which is a bit disappointing.
(I've tested the download speed with a 1000 MB BIN file, downloading it with wget on the Azure host.)
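For reference, a minimal sketch of that kind of test, end to end. The file name, port, and loopback server here are illustrative stand-ins for the real webserver, and the file is 100 MB rather than 1000 MB just to keep the demo quick; curl is shown, but `wget -O /dev/null <url>` reports the same average speed:

```shell
# Create a 100 MB test file of zeros to act as the download target.
dd if=/dev/zero of=/tmp/test.bin bs=1M count=100 2>/dev/null

# Throwaway HTTP server standing in for the webserver behind pfSense.
python3 -m http.server 8080 --directory /tmp >/dev/null 2>&1 &
SRV=$!
sleep 1

# Download to /dev/null so client disk speed doesn't skew the measurement;
# -w prints the average download speed.
curl -s -o /dev/null -w 'speed: %{speed_download} bytes/s\n' \
  http://127.0.0.1:8080/test.bin

kill $SRV
```

Downloading to /dev/null matters: it keeps the client's disk out of the measurement, so the number reflects the network path only.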
115 MB/s vs 50.7 MB/s is cutting the speed in half, which is not really acceptable for a fresh install. While monitoring the resources during a transfer, pfSense does not show any stress on the CPU or memory.
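As a sanity check on those numbers (plain unit arithmetic, nothing pfSense-specific): 115 MB/s is essentially a saturated 1 Gb/s link, while 50.7 MB/s is well under half of it:

```shell
# 1 Gb/s uplink in MB/s: 1000 Mb/s divided by 8 bits per byte = 125 MB/s raw,
# so ~115 MB/s after TCP/IP overhead is effectively line rate.
echo $((1000 / 8))              # prints 125
# 50.7 MB/s converted back to Mb/s: the firewall path is moving ~406 Mb/s.
awk 'BEGIN { print 50.7 * 8 }'  # prints 405.6
```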
Note: Squid is used as a reverse proxy for the webservers. Of course I have also tested the speed while bypassing the proxy: a small 10 MB/s improvement, but still not acceptable against a baseline of 115 MB/s.
Does anyone have the magical touch or tweak that could speed up pfSense? (I've done everything by the book.)
Thanks!
-
Maybe enable fast-forwarding? Try enabling/disabling NIC off-loading.
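For anyone following along, those knobs live in FreeBSD sysctls and interface flags. A hedged sketch, to be run from the pfSense shell: the interface name `vtnet0` is an assumption, exact flag support varies by NIC driver, and `net.inet.ip.fastforwarding` only exists on the FreeBSD 10.x-based pfSense releases of this era (FreeBSD 11 made the equivalent path the default):

```shell
# Enable fast forwarding (on pfSense this can also be set as a System Tunable
# under System > Advanced).
sysctl net.inet.ip.fastforwarding=1

# Toggle NIC offloads on a hypothetical interface for testing; disabling
# TSO/LRO is a common first step on virtual NICs.
ifconfig vtnet0 -tso -lro
```

These changes don't survive a reboot from the shell; making them permanent means the System Tunables page or the interface checkboxes in the GUI.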
Is pfSense running in a VM?
Exactly how much CPU is being used?
What are your CPU specs?
What NICs?
What is your network topology?
Please share any other useful details about your setup.
-
Tried fast-forward; it gave me 20% extra, from 30 MB/s to 50 MB/s. Offloading is disabled by default; I have not enabled it for testing yet.
Yes, it is running in a VM at a hosting company (which has pfSense in their OS library).
CPU type: Westmere E56xx/L56xx/X56xx (Nehalem-C), 2 CPUs: 2 package(s) x 1 core(s)
Load average: 0.10, 0.04, 0.01
Interfaces: 2x 10GBase full duplex, 1x WAN, 1x LAN. Behind the pfSense are 3 webservers on the LAN.
I do not want to brag, but I've been an infrastructure specialist for over 10 years. I know my way around. This is apparently the first encounter with something I cannot fix myself.
-
If I were to guess, I would say the slowness is VM-related (NIC drivers?). Search the virtualization sub-forum.
Edit: Actually, it kinda seems like your throughput is CPU-bound since fast-forward made such an impact… but you say the CPU never gets near 100%?
-
When I test the connection through the Squid reverse proxy, I see the squid process going up to a max of 25%.
The CPU itself never reaches 100% either, including in tests that passed through the reverse proxy. I doubt that it's purely NIC-related… but that is an assumption.
-
Anyone who might have an idea?
-
post more details about the hypervisor. freebsd isn't known for its great performance on all hypervisors … esxi is the one with the least amount of issues, others can be a bit of a pain
-
FreeBSD 11 has gotten some big VM love. You may get better performance with pfSense 2.4.
-
Are the hosts actually connected with 10 Gb links? Several hypervisors, including Hyper-V, report 10 Gb NICs to the guest OS, which doesn't necessarily mean they actually have a 10 Gb link. Having said that, I am currently planning a datacenter move. Our main datacenter has a few HP DL360 G9 machines, each with 2 Xeon 2950V3 CPUs, some pretty neat and fast 10-core chips, so a total of 20 cores (for what that matters). We are running pfSense (on Hyper-V) on those boxes, and it never passed 60 MB/s, about 500 Mbps.
In our new datacenter, though, I've mounted a spare Cisco 3750 switch and a Xeon L5520-based machine, so even one generation before your L56xx host. With the exact same pfSense setup, also on Hyper-V, this much older machine reaches line speed and I can do 115 MB/s, so about 1 Gbps down and up. Note that I am running pfSense on Hyper-V 2012 R2, which isn't even the best hypervisor for it at all. Now this difference might be the line in our current datacenter, but it might as well be some other issue. In any case, the rather similar L5520-based machine can actually do 1 Gbps routing with no issues at all in my setup. No specific tweaks done at all.