Tuning pfSense for lowest possible latency?
Hi everyone, I recently set up a pfSense box and it's been great so far. I've been fighting bufferbloat for a while, tuning my computer's TCP/IP stack and my network equipment for the lowest possible latency, and at this point the latency on the computer side is next to nonexistent. I'm aware that shrinking buffers can reduce throughput, but for 90% of what I do I want the fastest possible response, not high bandwidth. I only have a 25/25 line, and so far I've had no trouble maxing it out.
I've done some searching, and so far I mostly see tweaks for system performance or CPU use. What I'm after is anything I can tweak or change to ensure the minimum possible latency for the whole network. I'd rather stay away from using QoS to boost anything specific; I simply want as little buffering as possible, so that the only thing affecting my latency is the line to the outside world itself.
Any help or information would be greatly appreciated. Thanks in advance.
Nothing? I'm just trying to figure out which tunables control buffering and would greatly appreciate any help. I want the buffers to be next to nonexistent. I used to have this with my old DD-WRT router, and it was amazing: even with both upload and download maxed out entirely I would average about 250 ms of latency. With pfSense under the same test I'm getting around 690 ms, so clearly there is some buffering going on here, and I'd really like to know how to get rid of it.
I don't know much about BSD, but looking at the settings it appears that kern.ipc.maxsockbuf, net.inet.tcp.sendspace, and net.inet.tcp.recvspace are buffer settings? Please, any help would be great. I'd rather not go back to DD-WRT, as I love pfSense so far, but if I can't get this buffer latency down I'll have to, since I need my latency to be as low as possible.
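For reference, those three are indeed socket-buffer tunables, though note they mainly affect TCP connections terminated on the pfSense box itself (web GUI, proxy packages, etc.), not traffic merely forwarded through it. A hedged sketch of shrinking them from a shell; the values are illustrative assumptions, not tested recommendations:

```shell
# Illustrative values only: shrinking these trades throughput for latency,
# and they apply to sockets on the firewall itself, not forwarded flows.
sysctl kern.ipc.maxsockbuf=262144     # upper bound on any one socket buffer (bytes)
sysctl net.inet.tcp.sendspace=16384   # default TCP send buffer (bytes)
sysctl net.inet.tcp.recvspace=16384   # default TCP receive buffer (bytes)
```

On pfSense the same names can be entered under System > Advanced > System Tunables so they persist across reboots.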
stan-qaz last edited by
I'm no expert on this, but you seem to have something very wrong with your system. I haven't done any system-level tweaking on my pfSense 2.1 box, a fairly old Core 2 Duo 8400 3.0 GHz machine with Intel PRO/1000 Ethernet cards, and I'm seeing responses from pfSense and my first gateway many times faster than what you are reporting.
1 pfsense.home (172.16.0.1) 0.058 ms 0.082 ms 0.051 ms
2 10.48.32.1 (10.48.32.1) 10.103 ms 10.099 ms 11.672 ms (ISP's first address)
Connections to external hosts are also much faster.
stan@t310:~> ping google.com
PING google.com (22.214.171.124) 56(84) bytes of data.
64 bytes from lax02s20-in-f2.1e100.net (126.96.36.199): icmp_seq=1 ttl=54 time=20.2 ms
64 bytes from lax02s20-in-f2.1e100.net (188.8.131.52): icmp_seq=2 ttl=54 time=22.9 ms
^C
t310:/home/stan # traceroute google.com
traceroute to google.com (184.108.40.206), 30 hops max, 40 byte packets using UDP
 1 pfsense.home (172.16.0.1) 0.058 ms 0.082 ms 0.051 ms
 2 10.48.32.1 (10.48.32.1) 10.103 ms 10.099 ms 11.672 ms
 3 172.21.0.204 (172.21.0.204) 11.897 ms 11.727 ms 10.681 ms
 4 chndcorc01-pos--0-3-0-0.ph.ph.cox.net (220.127.116.11) 11.787 ms 14.589 ms 14.354 ms
 5 18.104.22.168 (22.214.171.124) 8.881 ms 12.359 ms 11.285 ms
 6 126.96.36.199 (188.8.131.52) 22.006 ms * *
 7 184.108.40.206 (220.127.116.11) 23.400 ms 23.956 ms 24.134 ms
 8 18.104.22.168 (22.214.171.124) 22.938 ms 21.264 ms 25.455 ms
 9 126.96.36.199 (188.8.131.52) 26.361 ms 25.362 ms 25.009 ms
10 lax02s20-in-f8.1e100.net (184.108.40.206) 27.975 ms 25.696 ms 25.709 ms
With 690 ms latency you may have something very wrong. I'd do a clean install with no tweaks and see what you get from it. If you still see really slow responses with a default setup, you may have a hardware issue of some sort.
timthetortoise last edited by
See how kern.ipc.nmbclusters="32768" treats you.
iamkrillin last edited by
I'll echo what stan has mentioned. I've done no tweaking of system tunables or anything like that. I run a fairly high-bandwidth WAN connection (100/50) and see latency well under 20 ms at nearly all times, usually 10-12 ms.
I think you are all misunderstanding what I'm saying. I'm not getting that latency at all times or under normal use; it's when BOTH upload and download are COMPLETELY maxed out (in my case 3 MB/s in both directions). Once you're maxed out, what you WANT for latency is for the router to just drop packets so that TCP backs off. But pfSense is clearly showing signs of having a buffer, and I'm trying to figure out how to remove or at least reduce it. That 600 ms is a result of the buffer; in other words, I have 600 ms worth of buffer.
See how kern.ipc.nmbclusters="32768" treats you.
Correct me if I'm wrong, but this setting INCREASES the buffer, does it not? That is the exact opposite of my goal. I'm trying to remove bufferbloat here. The DD-WRT router let me set the max TX buffer size in packets, which I had set to 2 packets (so basically nonexistent). I want something similar on pfSense: it should buffer next to nothing, and if the line is maxed it should drop packets like it should be doing in the first place. For anyone familiar with DD-WRT, the setting was TX Queue Length, so if pfSense has something similar I would love to know about it.
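Assuming the goal is a per-interface packet-count limit like DD-WRT's TX Queue Length, the closest FreeBSD knob I'm aware of is net.link.ifqmaxlen, the interface output queue length in packets. It's a boot-time tunable, so it goes in /boot/loader.conf.local rather than being set live with sysctl. The value below is an illustrative assumption, and the em(4) driver lines only apply if you happen to use Intel em NICs, which keep their own descriptor-ring tunables:

```shell
# Sketch only: shrink the per-interface output queue (the default is on the
# order of 50 packets) so the box drops early instead of queueing deeply.
echo 'net.link.ifqmaxlen="16"' >> /boot/loader.conf.local

# em(4)-specific hardware ring sizes; the NIC rings buffer packets too.
# Only relevant for Intel em interfaces, and values here are illustrative.
echo 'hw.em.txd="256"' >> /boot/loader.conf.local
echo 'hw.em.rxd="256"' >> /boot/loader.conf.local
```

A reboot is needed for loader tunables to take effect.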
stan-qaz last edited by
I'm seeing a slight increase in ping times across the net, into the mid-200 ms range, during speed tests that max out both upload and download. Ping times to my gateway run over a somewhat wider range during the tests, 6.6 ms to 94.5 ms. That's on my Cox cable connection, which runs at about 32 Mbps up and 11 Mbps down and should be stressing my pfSense box a lot more than your connection does yours. The pfSense CPU use peaked at 19% during the test but was usually closer to 10%.
I'll wait to see what others have to suggest; it will be interesting to see whether tweaking the buffers makes a difference, if someone knows how to do that.
A speed test is a bad way to test this, since it only maxes out one side at a time. The ideal setup is a server of your own (I have one in the town next to me that can do 1 Gbps) that you use to max out both upload and download simultaneously; depending on when you look at your ping, you'll be seeing either your download buffer or your upload buffer. If you're really interested in this, go listen to http://aolradio.podcast.aol.com/sn/sn0345.mp3 (the Security Now podcast) and skip to 57:50, where they explain what bufferbloat is and why it's bad.
Nothing? I don't suppose it's too much to ask to get an admin to reply on the subject, lol. Would really love some answers on this one.
Tillebeck last edited by
If you used the wizard for your traffic shaping, you will probably find that this is enabled on all queues:
"Explicit Congestion Notification"
It seems that if you set your limits a bit lower than your actual throughput and uncheck "Explicit Congestion Notification", you can get more throughput and probably better response times under high load (at the cost of more packet loss), since some buffering will be disabled.
I disabled "Explicit Congestion Notification", since when it was checked it lowered throughput by about 40% compared to when it was unchecked.
It was this page that suggested "Explicit Congestion Notification" could cost bandwidth, by doing a bit of buffering and signaling congestion without losing packets, if the equipment on both ends is not set up properly: http://www.openbsd.org/faq/pf/queueing.html#ecn
Be very careful when enabling ECN on your machines. Remember that any router or ECN enabled device can notify both the client and server to slow the connection down. If a machine in your path is configured to send ECN when their congestion is low then your connections speed will suffer greatly. For example, telling clients to slow their connections when the link is 90% saturated would be reasonable. The connection would have a 10% safety buffer instead of dropping packets. Some routers are configured incorrectly and will send ECN when they are only 10%-50% utilized. This means your throughput speeds will be painfully low even though there is plenty of base bandwidth available. Truthfully, we do not use ECN or RED due to the ability of routers, misconfigured or not, to abuse congestion notification.
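To make that concrete: the idea is to cap the shaper slightly below the real line rate and keep the queue length (qlimit) small, so packets are dropped early instead of buffered. A hedged sketch in the classic ALTQ pf.conf syntax from the linked OpenBSD FAQ (pfSense normally generates this from the shaper wizard, so the interface name, bandwidth cap, and qlimit values here are illustrative assumptions for a 25 Mb line, not a known-good config):

```
# Cap at ~92% of the 25 Mb uplink; a small qlimit means a shallow buffer
# and early drops. ECN/RED deliberately omitted, per the discussion above.
altq on em0 priq bandwidth 23Mb queue { q_ack, q_default }
queue q_ack     priority 7 qlimit 10
queue q_default priority 1 qlimit 10 priq(default)
```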
I'm sure others will appreciate your findings if you put in the time doing some testing.
Have you read this: https://wiki.freebsd.org/NetworkPerformanceTuning
Enabling IP fast forwarding can have a significant impact on throughput (I wasn't looking at latency), but it kills IPsec if you're using it.
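For anyone wanting to try it, fast forwarding is a single sysctl on the FreeBSD releases pfSense 2.x is based on; the caveat above stands, since it bypasses enough of the normal IP input path that IPsec stops working:

```shell
# Enable the optimized IP forwarding path; incompatible with IPsec.
sysctl net.inet.ip.fastforwarding=1
```

To persist it across reboots, add the same name and value under System > Advanced > System Tunables in the pfSense GUI.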