Playing with fq_codel in 2.4
-
Interval is meant to represent the RTT (ping) of most of your connections. Typically 100 ms is ballpark correct for most users. If most of the services you're connecting to are more than 100 ms away, then increase it.
-
So let's say I have 25 ms ping and most of the services I connect to are on the order of 30 ms; can I safely put an interval of 30 ms for both the download and upload queues?
And what about the target?
-
The target should be at least 1.5x the serialization delay of your MTU at your bandwidth. For example, a 1500-byte MTU is 12,000 bits; times 1.5 that is 18,000 bits, which at 1 Mbit/s takes 18 ms to send. In this case your target should be at least 18 ms, but not much higher. This does not scale linearly: if you have 10 Mbit/s of bandwidth, that doesn't mean you want a 1.8 ms target. Generally 5 ms is good and is the default; it works well from 10 Mbit/s all the way to 10 Gbit/s, though that's not to say it is the best for any given bandwidth.
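That arithmetic is easy to sanity-check in a shell. A minimal sketch (the 1500-byte MTU and the two rates are just the examples above, not recommendations):

mtu_bits=$((1500 * 8))                                  # 12,000 bits per full-size packet
echo "scale=1; 1.5 * $mtu_bits * 1000 / 1000000" | bc   # at 1 Mbit/s  -> 18.0 (ms minimum target)
echo "scale=1; 1.5 * $mtu_bits * 1000 / 10000000" | bc  # at 10 Mbit/s -> 1.8, but keep the 5 ms default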
This is how target and interval are used.
Target is used to determine if a packet has been in the queue for too long. Codel/fq_codel timestamps each packet when it gets enqueued and checks how long it has been in the queue when it gets dequeued. If the packet has been in the queue for too long and the queue is not in drop mode, it will drop/discard the packet. This is where interval comes in. The packets behind that packet are probably as old or older than the packet that just got dropped, but even though those packets are older than the target, CoDel will not drop them until at least an interval's worth of time has passed. For the next interval of time, all packets will continue to get dequeued as normal.
If, before the interval is reached, the queue sees a packet that has waited less than the target, it will leave drop mode, as the latency has come down. If by the next interval no packet has been seen below the target, the queue will drop/discard another packet and reduce the interval by some non-linear scaling factor, like a square root or something in that ballpark. Rinse and repeat: keep dequeuing packets until either a packet is below the target or the interval is reached again. If the interval is reached, drop the current packet and reduce the interval.
The reason the interval is the RTT is that a sender cannot respond to a dropped packet faster than the RTT. CoDel doesn't want to just start dropping a large burst of packets to keep latency low; that would kill bandwidth. It just wants to drop a single packet that is statistically likely to belong to one of the few heavy flows clogging the pipe, then wait an RTT's worth of time to see if that fixed the issue. If latency is still high, it drops another packet and becomes more aggressive by reducing the amount of time before it drops again.
This allows a large burst of packets to move through the queue without drops most of the time, but at the same time quickly attacks any flow that attempts to fill up the queue and keep it full. This keeps the queue very short.
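To make that concrete with dummynet's knobs: if, as asked above, most of your RTTs are around 30 ms, the interval can be lowered to match. A sketch using the same command shape that appears later in this thread (the sched/pipe numbers and the values are assumptions, not a recommendation):

/sbin/ipfw sched 1 config pipe 1 type fq_codel target 5ms interval 30ms
/sbin/ipfw sched 2 config pipe 2 type fq_codel target 5ms interval 30ms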
The only difference with fq_codel is that it has these queues per bucket, with a default of 1024 buckets. It hashes each packet, which causes all packets of a given flow to land in the same bucket, while flows are randomly distributed among all of the buckets. Coupled with a DRR (deficit round robin) algorithm that tries to distribute time fairly among the buckets, driven by the quantum value, fq_codel tends to isolate heavy traffic from lighter traffic; even where a light flow shares a bucket with a heavy flow, the latency is kept low and the heavy flow is statistically more likely to have its packets dropped.
Cake uses "ways" to keep perfect isolation of multiple flows sharing the same bucket, but that's another discussion for something that isn't even complete yet.
-
playing with this again
root@PFSENSE ~ # ipfw pipe show
00001: 17.987 Mbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
 sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002: 9.200 Mbit/s 0 ms burst 0
q131074 50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
 sched 65538 type FIFO flags 0x0 0 buckets 0 active
root@PFSENSE ~ # ipfw sched show
00001: 17.987 Mbit/s 0 ms burst 0
q65537 50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
 sched 1 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 1
00002: 9.200 Mbit/s 0 ms burst 0
q65538 50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
 sched 2 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 2
root@PFSENSE ~ # ipfw queue show
q00001 50 sl. 0 flows (256 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
    mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
q00002 50 sl. 0 flows (16 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
    mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
After routing on the default LAN outbound rule:
root@PFSENSE ~ # ipfw pipe show
00001: 17.987 Mbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
 sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002: 9.200 Mbit/s 0 ms burst 0
q131074 50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
 sched 65538 type FIFO flags 0x0 0 buckets 0 active
root@PFSENSE ~ # ipfw sched show
00001: 17.987 Mbit/s 0 ms burst 0
q65537 50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
 sched 1 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 1
00002: 9.200 Mbit/s 0 ms burst 0
q65538 50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
 sched 2 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 2
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip   0.0.0.0/0             0.0.0.0/0             109248 9956265 27 3497 9
root@PFSENSE ~ # ipfw queue show
q00001 50 sl. 0 flows (16 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
    mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
q00002 50 sl. 0 flows (16 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
    mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
This rises rapidly even though the connection is idle:
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip   0.0.0.0/0             0.0.0.0/0             109248 9956265 27 3497 9
console full of
fq_codel_enqueue over limit
and
fq_codel_enqueue maxidx = <random 3 digits, usually between 400 and 525>
(screenshot of the relevant part of the rule attached to the post)
Finally, if it helps: connectivity doesn't die if the traffic is only light, but any kind of bulk download, streaming, or speedtest will make "everything" time out until the rules are reloaded, ipfw is flushed, or the outbound rule is configured not to route to the dummynet pipes.
-
Those errors usually indicate that the queue size (the limit parameter) is too small. However, since yours is already very large, something else must be misconfigured for the queue to fill up and run out of space. It almost seems to me that the queue is filling and not enough traffic is passing through it (for instance if the limiters were not properly configured or there is a rule blocking the traffic).
Can you please also show us a screenshot of your limiters and queue configuration (apologies if you already posted this before, but I couldn't find it)?
Thanks.
-
Shell Output - ipfw pipe show
00001: 265.576 Mbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
 sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002: 275.576 Mbit/s 0 ms burst 0
q131074 50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
 sched 65538 type FIFO flags 0x0 0 buckets 0 active
Shell Output - ipfw sched show
00001: 265.576 Mbit/s 0 ms burst 0
q00001 50 sl. 0 flows (256 buckets) sched 1 weight 1 lmax 0 pri 0 droptail
    mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
 sched 1 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 7ms interval 100ms quantum 2000 limit 10240 flows 2048 ECN
   Children flowsets: 1
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip   0.0.0.0/0             0.0.0.0/0             58 3664 0 0 0
00002: 275.576 Mbit/s 0 ms burst 0
q00002 50 sl. 0 flows (256 buckets) sched 2 weight 1 lmax 0 pri 0 droptail
    mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
 sched 2 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 7ms interval 100ms quantum 2000 limit 10240 flows 2048 ECN
   Children flowsets: 2
  0 ip   0.0.0.0/0             0.0.0.0/0             65 96980 0 0 0
q00001 50 sl. 0 flows (256 buckets) sched 1 weight 1 lmax 0 pri 0 droptail
    mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
q00002 50 sl. 0 flows (256 buckets) sched 2 weight 1 lmax 0 pri 0 droptail
    mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
If we compare the ipfw sched show outputs, there is something missing on your side. I am using johnpoz's method, as it survives upgrades and doesn't need any patching or editing of the base system.
https://forum.pfsense.org/index.php?topic=126637.msg754199#msg754199
-
Probably won't be until Saturday, but I will post it. I tried an install of 2.4.0, restored the config, and it functions correctly, 100% the same configuration. Back to 2.4.2 and the problem comes back (tested yesterday).
I can post the limiter config now, I guess, as it's in the GUI, but the enable box is unticked.
I have two issues with what john posted.
1 - It only applies fq_codel at boot; it will get lost on a limiter reload.
2 - He posted some instructions that were not detailed, meaning I cannot be sure that if I follow his setup I am doing it right.

I can tell you that the outbound LAN rule is being hit, both by the counters that are displayed and by the fact that when I edit the rule to stop using the pipes, traffic works again; it wouldn't have that impact if there were another rule above it intercepting the traffic. Not to mention I don't have any blocking outbound rules other than pfBlockerNG used for DNSBL stuff.
What is missing in the ipfw sched show output? I don't notice anything.
-
1. It applies every time something causes a reload of packages, or at boot. Actually, I don't understand why you need to reload the limiter.
2. Maybe. It depends.

You can just configure your limiters via the GUI and then run this command via the GUI command line:
/sbin/ipfw sched 1 config pipe 1 type fq_codel target 7ms quantum 2000 flows 2048 && /sbin/ipfw sched 2 config pipe 2 type fq_codel target 7ms quantum 2000 flows 2048
Make sure that you have not messed up the traffic direction and masks.
Show your GUI config, including the LAN rule IN/OUT pipe and modded rules.
What is missing in the ipfw sched show output? I don't notice anything.
Mask is missing.
Actually, I am on 2.4.3, and I am not sure whether something is broken in 2.4.2.
-
OK, tonight I will unpatch pfSense so it doesn't use /root/rules.limiter.
I will just hit Apply in the limiter config, then add the command to a boot script, same as john, and reboot for it to take effect.
Then I will check whether the mask appears in ipfw sched show (PF firewall rules will have no impact on that).
The masks have been checked more times than I can count; they are no different from the guide posted by the OP and from what john has set. It is an interesting observation that it's missing from ipfw sched show, but that's not down to a misconfiguration in the GUI. I cannot rule out the patch being the culprit until I unpatch and retest, which I will do tonight, but remember I said this is working fine on pfSense 2.4.0.
If you guys consider the OP's post wrong, then maybe a new thread should be made, as I expect most people to follow the first post, not go several pages into something posted halfway through :)
--edit--
I have done it just now, running the command via the GUI command line as you suggested.
Output of ipfw sched show:
00001: 79.987 Mbit/s 0 ms burst 0
q65537 50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
 sched 1 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 7ms interval 100ms quantum 2000 limit 10240 flows 2048 ECN
   Children flowsets: 1
00002: 20.000 Mbit/s 0 ms burst 0
q65538 50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
 sched 2 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 7ms interval 100ms quantum 2000 limit 10240 flows 2048 ECN
   Children flowsets: 2
The GUI config is no different from the screenshots I already posted, the only difference being that the patch is disabled and I ran the command provided to apply fq_codel.
Attaching the GUI limiter config; I didn't post that before, sorry, but the LAN rule is already posted. You can see there are just cosmetic differences in the naming of the limiters and in the bandwidth limits.
I enabled logging of the outbound LAN rule, and sure enough the logs verify the rule is indeed being hit.
Jan 25 08:07:51 LAN Default allow LAN to any rule (1513840726) 192.168.1.124:49240 80.249.103.8:443 TCP:S
Jan 25 08:07:51 LAN Default allow LAN to any rule (1513840726) 192.168.1.124:49239 80.249.103.8:443 TCP:S
Jan 25 08:07:51 LAN Default allow LAN to any rule (1513840726) 192.168.1.124:49238 80.249.103.8:443 TCP:S
Jan 25 08:07:51 LAN Default allow LAN to any rule (1513840726) 192.168.1.124:49237 80.249.103.8:443 TCP:S
Jan 25 08:07:51 LAN Default allow LAN to any rule (1513840726) 192.168.1.124:49236 80.249.103.8:443 TCP:S
Jan 25 08:07:51 LAN Default allow LAN to any rule (1513840726) 192.168.1.124:49235 80.249.103.8:443 TCP:S
Jan 25 08:07:50 LAN Default allow LAN to any rule (1513840726) 192.168.1.186:55112 129.70.132.34:123 UDP
Jan 25 08:07:49 LAN Default allow LAN to any rule (1513840726) 192.168.1.186:45591 89.238.136.135:123 UDP
Jan 25 08:07:49 LAN Default allow LAN to any rule (1513840726) 192.168.1.186:59559 194.1.151.226:123 UDP
Jan 25 08:07:49 LAN Default allow LAN to any rule (1513840726) 192.168.1.186:43958 85.199.214.99:123 UDP
From where I sit, with the facts in front of me:
Configuration looks good.
Works properly on 2.4.0
I have tested using john's method.

pfSense 2.4.2 now has a closed kernel source (no public repo), so I cannot rule out code changes they may have made breaking dummynet.
If I set the in/out pipe to none on the LAN outbound rule, everything works again (albeit without the limiter processing the traffic). If I had an issue with the LAN rules not processing the right traffic, then that wouldn't be happening.
I appreciate your help, of course, and it's a very nice catch to notice the mask is missing from my schedulers, even though it is configured and does show on the queues.
ipfw queue show
q00001 50 sl. 0 flows (256 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
    mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
q00002 50 sl. 0 flows (256 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
    mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
I have just noticed another problem myself.
See in my ipfw queue show the 2 queues are q00001 and q00002
Yet in ipfw sched show the 2 queues are q65537 and q65538
In ipfw pipe show the 2 queues are q131073 and q131074.
Now the OP has the same anomaly, but in john's post and your own post you both don't; the queues in ipfw sched show match the queues in ipfw queue show, i.e. q00001 and q00002.
-
another quick update.
I copied ipfw.ko and dummynet.ko from 2.4.0 and it works properly.
I have q00001 and q00002 in ipfw sched show and the masks are present.
00001: 80.000 Mbit/s 0 ms burst 0
q00001 50 sl. 0 flows (256 buckets) sched 1 weight 1 lmax 0 pri 0 droptail
    mask:  0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
 sched 1 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 7ms interval 100ms quantum 2000 limit 10240 flows 2048 ECN
   Children flowsets: 1
00002: 20.000 Mbit/s 0 ms burst 0
q00002 50 sl. 0 flows (256 buckets) sched 2 weight 1 lmax 0 pri 0 droptail
    mask:  0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
 sched 2 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 7ms interval 100ms quantum 2000 limit 10240 flows 2048 ECN
   Children flowsets: 2
So I plan to reinstall pfSense from a clean install image to see if the modules are OK that way, as it seems I have a module issue somewhere that has exhibited itself on this 2.4.2 installation. The internet no longer dies now with the limiters activated.
-
Now that it's working and I have had a period of time using it, I can provide some feedback on its performance.
Effects on latency for uploads are better than ALTQ, which matches my earlier experience.
Downloads seem to perform similarly to ALTQ+HFSC, but the setup in terms of managing priorities etc. is greatly simplified for a similar result, although I have done no recent Steam testing. The setup here has no priorities, just the one downstream pipe for all traffic.
-
Effects on latency for uploads are better than ALTQ, which matches my earlier experience.
How much better?
You were using CoDel with ALTQ, yeah?
-
I was using FAIRQ+CoDel; it was doing a reasonable job, but had perhaps about double the jitter I see now on thinkbroadband upload tests at the same throughput.
-
I'm glad you got things working, chrcoluk. I'm starting to wonder if the instructions in the OP might be incompatible with the latest version of pfSense and that is what's causing issues.
I posted these instructions in another thread - https://forum.pfsense.org/index.php?topic=142321.0
----------------------------
Basic Instructions For Setting Up fq_codel:

1) Set up limiters - at minimum you'll need to create two root limiters and then create one queue under each root limiter. You can set up more queues if it's required/desired. This is also where you set your bandwidth limits.
2) Apply the queues to the necessary firewall rules (e.g. to the LAN rule(s) that allow your outbound traffic, in the "In/Out Pipe" section).
3) Enable fq_codel via the command line (you can SSH into the firewall for that). Issue the following command:

ipfw sched 1 config pipe 1 type fq_codel && ipfw sched 2 config pipe 2 type fq_codel
To validate that the command has indeed enabled fq_codel, issue this command:
ipfw sched show
If all looks good (you should now see fq_codel listed in the output), go ahead and test to see if performance is acceptable. If not, you can make changes by tweaking the algorithm's default parameters and/or your bandwidth limits. For instance, you may have to increase the algorithm's target latency if you have a connection with a slower upload speed (see the sketch after these steps), or decrease your bandwidth limits if e.g. your upload/download speeds aren't stable.
4) To make sure that your settings stick between reboots, install the ShellCmd add-on package in pfSense. Once you have done that, make sure you add the command in step 3 to ShellCmd.
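As an example of the target tweak mentioned after step 3: on a slow uplink, the 1.5x serialization-delay floor discussed earlier in this thread can push the target above the 5 ms default. A sketch (assuming sched 2/pipe 2 is your upload limiter and a roughly 1 Mbit/s uplink, hence 18 ms):

ipfw sched 2 config pipe 2 type fq_codel target 18ms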
Some additional notes:
1) On setting up limiters: See post #121 in the thread: https://forum.pfsense.org/index.php?topic=126637.msg754199#msg754199
2) On tweaking algorithm parameters: See post #198 (and following) in the thread: https://forum.pfsense.org/index.php?topic=126637.msg769665#msg769665
----------------------------
These instructions have worked for me through 2.4.2-RELEASE-p1. Would it make sense to start a new thread with them?
Thanks in advance.
-
I agree with a new thread. I would also include findings by others as notes, such as tuning the quantum size; I think someone mentioned that 300 is good for prioritising small packets?
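For anyone who wants to try that, a sketch based on the command shape used earlier in the thread (the quantum value of 300 and the sched/pipe numbers are the assumptions here; restate any other parameters you have changed in the same command, in case reconfiguring resets them to defaults):

ipfw sched 1 config pipe 1 type fq_codel quantum 300 && ipfw sched 2 config pipe 2 type fq_codel quantum 300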
This thread has a lot of pages, so it's easy to miss stuff.
For reference, I am still using the newer method; I have left pfSense unpatched and added the command in shellcmd so it applies on every reboot (I've still got the patch applied that enhances the limiter diagnostics page in the GUI).
Doing a filter reload doesn't seem to break it, so it's fine for me.
It is possible I somehow mixed up the modules or something, as I had added extra modules for functionality, so at some point I will do a clean install of 2.4.2 to ensure both modules are in sync; right now I am still using the 2.4.0 modules. Or I might install 2.4.2 elsewhere and just copy the modules across from that.
-
I am getting ready to take the plunge on 2.4.2_p1. I have been using the wizard with Multiple LAN/WAN (I currently have 10 VLANs, 1 WAN, and three VPN_WAN connections). I do so enjoy and envy those people that have 100/50 and 50/25 connections, but I have been cursed with AT&T and my DSL is 18/2, so I need to squeeze out the most optimal setup.
I have been reading, but was wondering if someone has possibly started a new thread, so that I can be up to date on all the tricks to make this work smoothly?
-
Quick question: if I run the command ipfw sched show, I see fq_codel. If I look in the GUI at the limiter diagnostics page, I see FIFO. Is that what I should see? The limiters are working fine, but I wonder if fq_codel is really applied to the stream, or is what I see just the result of setting limiters.
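One way to check whether fq_codel is really attached, regardless of what the GUI page shows (a sketch; the grep is just a convenience):

/sbin/ipfw sched show | grep FQ_CODEL

If that prints the FQ_CODEL parameter lines for your schedulers, the algorithm is active even if the limiter diagnostics page still reports FIFO.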
Edit: Just to see what happens, I left everything in place but removed the entry from shellcmd and restarted. This restored the system settings related to limiters. The result on DSLReports is A+ across the board. Limiter info in the GUI now populates with info about the limiters. Maybe I missed something in this process, but this is much better for me. I notice, as shown in the screenshots and as mentioned here in other places, that I use schedulers 1 and 2 in my script but the system limiters do not. My DSLReports ratings prior to this change were D and F. I have a feeling it's something I'm missing, but for the moment I am getting the result I was after.
One caveat about this current config: I have symmetrical gigabit that will do 920 each way without limiters. With my current config it tests at ~750, which is fine.
![Screen Shot 2018-03-02 at 3.31.46 PM.png](/public/imported_attachments/1/Screen Shot 2018-03-02 at 3.31.46 PM.png)
![Screen Shot 2018-03-02 at 3.31.33 PM.png](/public/imported_attachments/1/Screen Shot 2018-03-02 at 3.31.33 PM.png)
-
I figured I'd share my config, as I spent some time today with little to do at work converting over to an fq_codel setup for pfSense. I have a 1 Gig Verizon FIOS line coming in, which is rated at 940 down and 880 up. I have a pretty straightforward setup going, as I only split into 3 queues: basically, I prioritize my games and VoIP to high and lower all my p2p/Plex download traffic below everything else.
I have the Shell Command to create the proper queue setup:
https://i.imgur.com/k08PJQZ.png
I have upload and download limiters with 3 queues each, at 880 Mbit/s and 940 Mbit/s respectively. In those, I have high, default, and low queues at weights of 75, 25, and 5.
https://i.imgur.com/6JZTEXd.png
https://i.imgur.com/6cDzTe5.png
Source and Destination in the config get a little squirrelly for me; I want to make sure I have a clean split between my upload and download traffic, so I didn't select either there, as I handle that in the rules config.
I have a series of match floating rules with logging setup so I can validate. All shaping is selected on my WAN interface:
https://i.imgur.com/HeMy45B.png
My rules examples are a bit big, so I linked them a little differently:
Default queue
http://i.imgur.com/CQDQGcf.png
Low priority rule
http://i.imgur.com/MDuvFFe.png

For floating rules and pipes, the in and out are switched, as noted in the help text. I did check that in my speed test, as I can see the speeds are exactly what I expected. I noticed much better performance compared with the other schedulers stock in pfSense.
My speedtest results made me happy:
Edit 1: I seem to have a slight problem with matching my internal (private) IPs properly. I've gotta do a little more testing to figure out why they aren't matching. My WAN rules work perfectly though, so it's a start. I just want to make sure I can get internal stuff matched as well.
From what I remember about limiters, I thought that the mask needs to be set depending on whether the traffic is inbound or outbound.
I have my upload mask set to "source address" and download to "destination address" for the limiter and each queue nested under them.
Is this correct? It seems to work and I see traffic passing; I didn't with it set to "none".
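For what it's worth, that matches the masks visible in the ipfw queue show outputs earlier in this thread (source masked on the upload queue, destination on the download queue). The raw ipfw equivalent of those GUI settings would look something like this (the queue/sched numbers are assumptions):

ipfw queue 1 config sched 1 mask src-ip 0xffffffff   # upload: one dynamic queue per internal source address
ipfw queue 2 config sched 2 mask dst-ip 0xffffffff   # download: one dynamic queue per internal destination address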
-
I have some "issues" with the download queue. So i'd like to tell you what i have done so far.
I am on PfSense 2.4.2_1 and i have a symetrical 1000Mbit line and DSL reports
image before is attached. And My dsl report looks like this, as expected (image_1…)- Creating Limiters (screenshots attached for the upload Part, for the download part its the same but with a different name)
-
Upload (limited to 900Mbit)
-
highUp 75
-
defaultUp 25
-
lowUp 5
-
Download (limited to 900Mbit)
-
HighDown
-
defaultDown
-
lowDown
-
Creating Floating rules Rules
I created in total 6 Floating rules but only going to show the default ones in the screenshots
the other ones are basically clones anyway -
Installing the shellcmd package and adding
ipfw sched 1 config pipe 1 type fq_codel && ipfw sched 2 config pipe 2 type fq_codel -
horrible results, something is not working right on the download side, dunno what it is :D
Also an imigur album to just take a look at all the screenshots. https://imgur.com/a/bkIuA
![05_rule setup.JPG](/public/imported_attachments/1/05_rule setup.JPG)
![10_horrible_download results.JPG](/public/imported_attachments/1/10_horrible_download results.JPG)
-
I'm getting the feeling that "ipfw sched 1 config pipe 1 type fq_codel && ipfw sched 2 config pipe 2 type fq_codel" doesn't mean the same thing when you have multiple *queues. I find it interesting that your "ipfw sched show" says something like
sched 1 weight 75 fq_codel
  child flowsets: 3 2 1
sched 2 weight 25 fq_codel
  child flowsets: 6 5 4

Why do your two scheds claim to have different weights if they're unrelated? Start small. Do a single queue per direction, then work your way up to 3 each.
*I use the term "queue" in the general sense, not in the technical context of the ipfw command.
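A start-small sketch of that suggestion (limiter numbers assumed: pipe 1/sched 1 for one direction, pipe 2/sched 2 for the other, one child queue each):

ipfw sched 1 config pipe 1 type fq_codel
ipfw sched 2 config pipe 2 type fq_codel
ipfw sched show   # each sched should now report type FQ_CODEL with a single child flowset

Once that behaves, add the weighted queues back one at a time and re-test after each change.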