Playing with fq_codel in 2.4
-
I believe I have everything set up per post #120 and beyond. My output for "ipfw sched show" looks correct, shellcmd is all set, and the router has been rebooted. I'm testing on a network with no traffic other than my computer. My bufferbloat for downloads on dslreports has improved greatly (300+ down to 51ms average), but I can't seem to get rid of the bufferbloat on the upload (the value slides between 300-1000ms depending on the bandwidth value I select for the limiter). My connection is slow compared to most of the folks I've seen in this thread (15 down / 1 up). I'm just wondering whether I will ever be able to completely dial out the bufferbloat on a slow link like mine, or do I just need to keep experimenting with different bandwidth values?
You might have some trouble just getting it to work. I can't find where I read it, but the codel algorithm has trouble on connections at or below 1Mbps because the transmission time of an MTU-sized frame is too close to the delay that codel uses for its inner workings.
In this link they talk about using codel at what they call really low speeds. I don't know what any of it means, so good luck.
https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel/#tuning-codel-for-circumstances-it-wasn-t-designed-for
-
Actually, this article does help quite a bit. There was also mention of adjusting some of these tunables by the OP; I just didn't realize how critical it was until now. Here is the list:
net.inet.ip.dummynet.fqcodel.limit: 10240
net.inet.ip.dummynet.fqcodel.flows: 1024
net.inet.ip.dummynet.fqcodel.quantum: 1514
net.inet.ip.dummynet.fqcodel.interval: 100000
net.inet.ip.dummynet.fqcodel.target: 5000
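For reference, these can be listed from the shell with:
sysctl net.inet.ip.dummynet.fqcodel    # prints every OID under that branch (only present once fq_codel is loaded)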
The article you linked suggests the following for a slow connection like mine:
net.inet.ip.dummynet.fqcodel.limit < 1000
net.inet.ip.dummynet.fqcodel.quantum < 300
ECN OFF
Is the right way to make these adjustments to add/modify them in the GUI (System->Advanced->System Tunables)?
-
Your target should also be at least 1.5x the time it takes to send an MTU's worth of data at your bandwidth. Cake does this automatically, as they found that general rule works well.
The reason for this is that you don't want a single MTU-sized packet to trip the drop logic.
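As a rough worked example (my own numbers, assuming a 1500-byte MTU and a 1 Mbit/s uplink like the one discussed above):
# serialization time = 1500 bytes * 8 bits / 1,000,000 bit/s = 12 ms
# 1.5 x 12 ms = 18 ms, so a target in the 15-20 ms range rather than the 5 ms default
mtu_bytes=1500
up_kbps=1000
echo $(( mtu_bytes * 8 / up_kbps ))   # prints 12 (ms to serialize one full-size frame)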
-
Your target should also be at least 1.5x the time it takes to send an MTU's worth of data at your bandwidth. Cake does this automatically, as they found that general rule works well.
The reason for this is that you don't want a single MTU-sized packet to trip the drop logic.
What is the correct way to set these targets?
-
I assume the "net.inet.ip.dummynet.fqcodel" settings you mentioned in your post just prior.
-
Your target should also be at least 1.5x the time it takes to send an MTU's worth of data at your bandwidth. Cake does this automatically, as they found that general rule works well.
The reason for this is that you don't want a single MTU-sized packet to trip the drop logic.
What is the correct way to set these targets?
So I experimented with this a little bit, and changing the values below, either in the System Tunables section or by editing loader.conf.local, didn't seem to work, because these fq_codel defaults don't show up until fq_codel is actually loaded. So if you are loading fq_codel using shellcmd, this won't really work.
net.inet.ip.dummynet.fqcodel.limit: 10240
net.inet.ip.dummynet.fqcodel.flows: 1024
net.inet.ip.dummynet.fqcodel.quantum: 1514
net.inet.ip.dummynet.fqcodel.interval: 100000
net.inet.ip.dummynet.fqcodel.target: 5000
You can make it work, but you would have to load fq_codel using ipfw so the defaults show up, change the default values to what you want using sysctl, and then load fq_codel again with the new values.
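A rough, untested sketch of that sequence (the sysctl values appear to be in microseconds, judging by the defaults listed above):
ipfw sched 1 config pipe 1 type fq_codel            # first load, so the fqcodel sysctls appear
sysctl net.inet.ip.dummynet.fqcodel.target=15000    # e.g. 15 ms, expressed in microseconds
sysctl net.inet.ip.dummynet.fqcodel.quantum=300
ipfw sched 1 config pipe 1 type fq_codel            # load again so the new defaults are picked up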
However, I think there is an easier way, as you can change these values just by editing the ipfw command that loads fq_codel.
From a link posted earlier in this thread:
http://caia.swin.edu.au/freebsd/aqm/patches/README-0.2.1.txt
From section 3, the syntax for loading fq_codel via ipfw is:
ipfw sched x config [...] type fq_codel [target t] [interval t] [ecn | noecn] [quantum n] [limit n] [flows n]
where t is a time in seconds (s), milliseconds (ms) or microseconds (us), with milliseconds as the default interpretation, and n is an integer.
The command to load fq_codel with the defaults (from earlier in the thread):
ipfw sched 1 config pipe 1 type fq_codel && ipfw sched 2 config pipe 2 type fq_codel
Using similar syntax as above, in your case all you would have to change is:
ipfw sched 1 config pipe 1 type fq_codel target 15 noecn quantum 300 limit 600 && ipfw sched 2 config pipe 2 type fq_codel target 15 noecn quantum 300 limit 600
This does the following:
Set target to 15ms (you'll need something higher than the default of 5ms given the low upload on your 15/1 connection)
Turn off ECN
Set fq_codel quantum to 300
Set the queue size limit to 600 packets (feel free to tweak these values as necessary)
Now of course, this all assumes that you have already created two limiters and queues underneath them.
You can issue the ipfw sched command above from the command line and make it persistent across reboots using shellcmd.
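For completeness, if you ever wanted to build the limiters and child queues from the shell too rather than via the GUI, a rough sketch (made-up numbers for a 15/1 line; adjust which pipe is upload vs. download to match your rules) would be:
ipfw pipe 1 config bw 14Mbit/s      # download limiter, set a bit below the 15 Mb line rate
ipfw pipe 2 config bw 950Kbit/s     # upload limiter, a bit below the 1 Mb line rate
ipfw sched 1 config pipe 1 type fq_codel target 15 noecn quantum 300 limit 600
ipfw sched 2 config pipe 2 type fq_codel target 15 noecn quantum 300 limit 600
ipfw queue 1 config sched 1         # one child queue attached to each scheduler
ipfw queue 2 config sched 2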
Hope this helps.
-
I also have a question for everyone following this thread:
The command to enable fq_codel that has been shown in this thread is:
ipfw sched 1 config pipe 1 type fq_codel && ipfw sched 2 config pipe 2 type fq_codel
However, looking at the examples in 3.3 here:
http://caia.swin.edu.au/freebsd/aqm/patches/README-0.2.1.txt
Doesn't the command have to be appended to include setting the scheduler on the queue, i.e.:
ipfw sched 1 config pipe 1 type fq_codel && ipfw queue 1 config sched 1 && ipfw sched 2 config pipe 2 type fq_codel && ipfw queue 2 config sched 2
The above assumes only two queues, one under each limiter.
Thanks in advance for the help.
-
I also have a question for everyone following this thread:
The command to enable fq_codel that has been shown in this thread is:
ipfw sched 1 config pipe 1 type fq_codel && ipfw sched 2 config pipe 2 type fq_codel
However, looking at the examples in 3.3 here:
http://caia.swin.edu.au/freebsd/aqm/patches/README-0.2.1.txt
Doesn't the command have to be appended to include setting the scheduler on the queue, i.e.:
ipfw sched 1 config pipe 1 type fq_codel && ipfw queue 1 config sched 1 && ipfw sched 2 config pipe 2 type fq_codel && ipfw queue 2 config sched 2
The above assumes only two queues, one under each limiter.
Thanks in advance for the help.
I have used the second, official variant, but have also found that the first variant works just fine if you have already configured the limiters via the GUI.
-
I am trying to set this up on an SG-3100 with an asymmetrical gigabit connection. I set up two queues and am configuring them as follows:
ipfw pipe 1 config bw 920Mb
ipfw sched 1 config pipe 1 type fq_codel
ipfw queue 1 config sched 1 mask dst-ip6 /128 dst-ip 0xffffffff
ipfw pipe 2 config bw 40232Kb
ipfw sched 2 config pipe 2 type fq_codel
ipfw queue 2 config sched 2 mask src-ip6 /128 src-ip 0xffffffff
On the firewall side I am using a floating rule to apply the queues. This all works: with the floating rule enabled I get an A / A+ on the DSL Reports speed test, whereas without the rule I get a C at best for bufferbloat. The issue I am having is that with the rule enabled I get at most ~650Mb/s downstream, while I have no problem hitting 940 with the rule disabled. Am I running into a CPU limitation of the SG-3100? Is there anything I can try to tweak?
-
I am trying to set this up on an SG-3100 with an asymmetrical gigabit connection. I set up two queues and am configuring them as follows:
ipfw pipe 1 config bw 920Mb
ipfw sched 1 config pipe 1 type fq_codel
ipfw queue 1 config sched 1 mask dst-ip6 /128 dst-ip 0xffffffff
ipfw pipe 2 config bw 40232Kb
ipfw sched 2 config pipe 2 type fq_codel
ipfw queue 2 config sched 2 mask src-ip6 /128 src-ip 0xffffffff
I'd love to have someone else chime in, but the way I "think" the masks work is that you are sharing bandwidth between sources and destinations based on the "mask src / mask dst" rules. That causes dynamic queues to be set up, and bandwidth gets shared out amongst those queues, so assuming you have some other traffic going on, even if it's only a little, it will limit the bandwidth.
Maybe try it without the src and dst masks, just leaving those blank, and see how your results look. Also, can you check the CPU %? I'm not familiar enough with that particular device to tell you how, but that would confirm whether you are hitting a CPU bottleneck instead.
On the firewall side I am using a floating rule to apply the queues. This all works: with the floating rule enabled I get an A / A+ on the DSL Reports speed test, whereas without the rule I get a C at best for bufferbloat. The issue I am having is that with the rule enabled I get at most ~650Mb/s downstream, while I have no problem hitting 940 with the rule disabled. Am I running into a CPU limitation of the SG-3100? Is there anything I can try to tweak?
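If it helps, one way to watch per-core load from an SSH session while a speed test runs (these are standard FreeBSD top flags, so treat this as a generic suggestion rather than anything SG-3100 specific):
top -P -S -H    # -P per-CPU usage, -S system processes, -H show kernel/driver threads
                # watch for one core sitting near 100% while the others stay mostly idle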
-
I tried leaving the masks off and that didn't seem to make a difference. Then I tried disconnecting everything from the firewall except the laptop that I am using to test with, so that I could ensure nothing else was on the network. That didn't make a difference either. When the limiters are not enabled I can run a sustained 60 second download test and hit gig speeds 95% of the time, other than occasional dips. When I enable the limiters I can't get over the 650ish mark.
I can run top from an SSH session, and it looks to be pegging one of the cores both with and without fq_codel, so I can't really tell one way or the other there.
-
If you have symmetrical gig/gig, why are you setting 40232Kb on pipe 2?
So 40Mb? What's the amount of ACK traffic needed to hit 650 vs 940? You'd have to do the math, etc. But limiting your upload could have an effect on your max download... Shoot, 500Mbps down would be something like 8Mbps up just in ACKs.
If you're on gig/gig, why would you want to limit your upload so much?
I'm not a QoS guy; this was really my first test of any sort of QoS in pfSense. But I was asked to take a look, and that is what jumps out at me as odd. In a few days, when my 4860 is online, I'll be able to test your settings. It won't be ready until the weekend for sure, but I should be able to try and duplicate the problem next week, say.
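Back-of-the-envelope version of that ACK math (my rough assumptions, not measured: ~1500-byte segments and one ~54-byte ACK on the wire per two segments):
# upstream ACK bandwidth for a given download rate, illustrative only
down_mbps=500
echo "$down_mbps * 54 / (2 * 1500)" | bc -l   # ~9 Mbps of ACKs for 500 Mbps down
down_mbps=940
echo "$down_mbps * 54 / (2 * 1500)" | bc -l   # ~17 Mbps of ACKs near a full gig download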
-
John thanks, it's asymmetrical, not symmetrical. :) The connection maxes out at about 940/42.
-
What's the CPU as well? I couldn't drive shaping until I did some upgrades. I'm overpowered now with an:
Intel(R) Core(TM) i7-4500U CPU @ 1.80GHz (4 cores)
But I wanted to future-proof a bit. I can drive gig in both directions at the same time without breaking my CPU atm.
-
It's a dual core ARM v7 Cortex-A9 @ 1.6 GHz with NEON SIMD and FPU. I think I am either hitting some odd issue with traffic shaping on ARM architecture or a CPU limitation.
-
It's a dual core ARM v7 Cortex-A9 @ 1.6 GHz with NEON SIMD and FPU. I think I am either hitting some odd issue with traffic shaping on ARM architecture or a CPU limitation.
1) Do you run any type of IPS/IDS currently (e.g. Snort)? If so, try disabling it temporarily to see if that helps.
2) What's the output of "ipfw sched show"? I think it would be good to double-check your fq_codel parameters.
3) As an alternative, you can try ALTQ traffic shaping (instead of dummynet) to see if you experience similar limitations. Instead of using limiters, try setting up the ALTQ FAIRQ scheduler and then enable Codel in the default queue. You'll probably also need to increase the default queue size from 50 to something larger, like 512 or 1024. FAIRQ + Codel is not quite the same as fq_codel, but it's similar (I have used both and the performance in my case was comparable). If you try that, do you see a similar speed limitation to the one you currently see with dummynet/fq_codel?
Hope this helps.
-
Previously, when I last tried this on 2.4.0 dev, it at least functioned.
Now, trying again on the 2.4.2 release, it just won't work.
Following the instructions to the letter, and even using my backed-up config for this which previously functioned, results in all outbound connections timing out as if blocked.
If I set the in/out pipes to use the child queues, connections will work briefly but then time out after a few seconds.
If I set the in/out pipes to none, everything works, but of course the shaper isn't being used. The limiter info page shows the correct information, as do ipfw pipe show and ipfw sched show. The issue seems to be with PF redirecting traffic to ipfw dummynet.
Also of interest: on the console I am getting notices saying the end of the ipfw rules was hit and the packet denied, which is odd, as the dmesg boot output shows the default policy for ipfw set to allow. That conflicts with the denied packets.
-
OK, a bit more information, since I was doing this before going to bed and was getting tired.
As mentioned before, all traffic was being blocked, except on one occasion which I will mention in a moment.
I disabled ALTQ, enabled the dummynet limiter, and disabled the majority of my LAN firewall rules (since they were there to divert traffic to different-priority HFSC queues), so the firewall was vastly simplified: the only LAN rules were the pfBlockerNG rules, rules to route specific IPs to VPNs, and the default outbound LAN rule, which I set to use the dummynet in/out pipes.
If I viewed the live limiter stats (the enhanced stats with the patch provided in this thread), I always saw an active bucket seemingly processing continuous data even though there was only idle network activity, and all internet connections that were not already established timed out.
However, on one occasion this bucket didn't appear and connectivity was working, but as soon as I started a speed test it all went back to timing out and the bucket showing continuous data was back.
In addition, if I left it like this for a while, the console would start getting flooded with messages like "fq codel enqueue over limit" and maxidx warnings. If I left it even longer, the kernel panicked.
Running ipfw pipe flush immediately killed the console messages (and also prevented the panic).
Changing the default outbound LAN rule to not use the dummynet pipes immediately restored connectivity. However, this did not stop that bucket's flow of data, so that occurred even when no firewall rules were routing traffic to the dummynet pipes. I think HFSC may have been causing me some issues, so what I have done for now is gone back to ALTQ and FAIRQ for the upstream, but there is no downstream ALTQ active, so right now I have no downstream shaping.
I have no idea how to proceed now, so unless I get suggestions I won't be trying it again for a while, as I don't think I'll get anywhere. I am curious, though, whether anyone here who is using it is running 2.4.2.
-
Previously, when I last tried this on 2.4.0 dev, it at least functioned.
Now, trying again on the 2.4.2 release, it just won't work.
Following the instructions to the letter, and even using my backed-up config for this which previously functioned, results in all outbound connections timing out as if blocked.
If I set the in/out pipes to use the child queues, connections will work briefly but then time out after a few seconds.
If I set the in/out pipes to none, everything works, but of course the shaper isn't being used. The limiter info page shows the correct information, as do ipfw pipe show and ipfw sched show. The issue seems to be with PF redirecting traffic to ipfw dummynet.
Also of interest: on the console I am getting notices saying the end of the ipfw rules was hit and the packet denied, which is odd, as the dmesg boot output shows the default policy for ipfw set to allow. That conflicts with the denied packets.
I've got fq_codel working fine on the latest 2.4.2 release using two root limiters and four queues under each. The only algorithm parameters I've tweaked from the defaults are the limit, interval, and target.
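For reference, the kind of command I mean looks roughly like this (the values here are placeholders for illustration, not my actual settings):
ipfw sched 1 config pipe 1 type fq_codel target 8ms interval 80ms limit 1024
ipfw sched 2 config pipe 2 type fq_codel target 8ms interval 80ms limit 1024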
Can you show us the output of:
ipfw sched show
ipfw pipe show
ipfw queue show
Along with a screenshot of how your limiters/queues are set up? That will help us debug things further.
Hope this helps.
-
OK, I wish I had already stored that information. I will go back to this, probably on Boxing Day, and post the information you requested then.
-
I don't have any problems with fq_codel either. I think it could be some package or enabled feature, like captive portal, that's causing it.