Playing with fq_codel in 2.4
-
@robnitro Take a look at the following guide; it should explain the issue you are seeing and show how to work around it - hint: floating rule #1.
https://forum.netgate.com/post/807490
-
Thanks, I did add that rule for both WAN and LAN.
The only difference is that my rules are based on LAN, because I have FiOS. The cable boxes' video on demand (VOD) can stream above my 50 Mbit limit because they aren't part of my QoS (excluded via a floating rule matching an alias of the cable box IPs). If they were included, I would lose my 50 Mbit data maximum. Example: with a 16 Mbit HD stream playing, I can still get 50 Mbit of data. So the boxes sit outside fq_codel and/or HFSC.
When I have limiters on WAN, using client IPs in floating rules to exclude them from the limiter doesn't work most of the time. I think it's because the traffic is technically arriving at and leaving from the router's WAN IP address after NAT, not the local address, which only exists on the LAN side.
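One way to see this (a hypothetical diagnostic; the LAN address is invented): as I understand it, pfSense applies outbound NAT before a WAN-out floating rule is evaluated, and the state table shows both addresses for a NAT'd connection.

```shell
# Hypothetical: inspect states for one LAN client (address invented).
# NAT'd entries list both the pre-NAT LAN address and the post-NAT WAN
# address, which is why a WAN-side rule only ever "sees" the WAN address.
pfctl -ss | grep 192.168.1.50
```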
-
I have two ISPs (10 Mbps + 15 Mbps symmetric) currently configured for load balancing in pfSense. I followed your config for fq_codel on both WANs, and both seem to be working in that they limit their download and upload speeds.
But then I found a problem: when I upload a file to Google Drive, sometimes it doesn't upload at full speed (10 KBps flat) and sometimes it does. I'm not sure whether it's related to fq_codel, but when I disable the fq_codel floating rules, it always uploads at the full speed of my bandwidth.
Here's my config.
I hope you can help me with this problem.
Findings that may help:
- When the floating rules are enabled, load balancing works for downloads but not uploads. Also, all upload traffic sticks to the default gateway even though load balancing is enabled.
-
I just tried the floating rules on WAN instead of LAN, omitting the cable boxes (STBs) and my desktop. It seems that if a rule is on WAN, you cannot use a source or destination IP to omit a client from the queue.
Just to remind you: on FiOS you can have a 50/50 line, but the cable boxes' video on demand does not count towards the limit. Example with no limiters: watch a 16 Mbit stream on a cable box and speed tests still give 50 Mbit. But put a 50 Mbit limiter on everything, and with a 16 Mbit stream playing the desktop only gets 34 Mbit. Only LAN-based rules seem to work for excluding the cable boxes from the limiter.
I also have separate limiters that give guest Wi-Fi clients 20 Mbit, and this also has to be done with LAN floating rules, unfortunately.
I wonder if there could be a tiered form of queues that feed into each other.
Example: a main fq_codel queue at 50 Mbit, with guests under it behind a simple 20 Mbit limiter, instead of needing two separate queues of 50 and 20 whose traffic is never joined together.
-
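For what it's worth, dummynet's textual config can at least sketch this tiered idea: weighted queues sharing one pipe, so guest traffic gets a proportional share inside the common 50 Mbit total rather than a disjoint limiter. This is only a sketch under assumptions - weighted queues on a pipe use dummynet's WF2Q+-style scheduling rather than FQ_CODEL, and all numbers and weights are illustrative, not a tested config:

```shell
# Sketch only (untested): one 50 Mbit pipe with two weighted child queues.
# Weights apportion bandwidth under contention; an idle queue's share
# remains available to the other queue.
pipe 1 config bw 50Mb
queue 1 config pipe 1 weight 60   # main LAN traffic
queue 2 config pipe 1 weight 40   # guest Wi-Fi traffic
```

The floating rules would then direct normal clients into queue 1 and guest clients into queue 2.
-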
Can you use the fq_codel in/out limiters in conjunction with the standard traffic shaping?
-
You could, but not on the same interface. For example, WAN with the shaper and LAN with the limiter would work, I think.
-
I'm using shapers together with limiters on LAN. Before fq_codel, I was using plain limiters to keep guests on Wi-Fi from hogging the connection with their Apple updates.
With fq_codel, I kept the same limiter speeds and just changed the scheduler. I also added an fq_codel limiter for the clients that aren't throttled, and excluded the cable boxes since they aren't included in the ISP speed limit (FiOS).
Unfortunately, I cannot use a single fq_codel limiter for everyone this way, but it doesn't seem to affect bufferbloat.
Limiters I have:
Schedulers:
00001: 27.000 Mbit/s 0 ms burst 0
q65537 50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
 sched 1 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 80ms quantum 300 limit 10240 flows 1024 NoECN
   Children flowsets: 1
00002: 28.000 Mbit/s 0 ms burst 0
q65538 50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
 sched 2 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 80ms quantum 300 limit 10240 flows 1024 NoECN
   Children flowsets: 2
00004: 30.000 Mbit/s 0 ms burst 0
q65540 50 sl. 0 flows (1 buckets) sched 4 weight 0 lmax 0 pri 0 droptail
 sched 4 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 80ms quantum 300 limit 10240 flows 1024 NoECN
   Children flowsets: 4
00005: 47.000 Mbit/s 0 ms burst 0
q65541 50 sl. 0 flows (1 buckets) sched 5 weight 0 lmax 0 pri 0 droptail
 sched 5 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 80ms quantum 300 limit 10240 flows 1024 NoECN
   Children flowsets: 5
00006: 48.000 Mbit/s 0 ms burst 0
q65542 50 sl. 0 flows (1 buckets) sched 6 weight 0 lmax 0 pri 0 droptail
 sched 6 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 80ms quantum 300 limit 10240 flows 1024 NoECN
   Children flowsets: 6
00007: 29.000 Mbit/s 0 ms burst 0
q65543 50 sl. 0 flows (1 buckets) sched 7 weight 0 lmax 0 pri 0 droptail
 sched 7 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 80ms quantum 300 limit 10240 flows 1024 NoECN
   Children flowsets: 3
Queues:
q00001 50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
q00002 50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
q00003 50 sl. 0 flows (1 buckets) sched 7 weight 0 lmax 0 pri 0 droptail
q00004 50 sl. 0 flows (1 buckets) sched 4 weight 0 lmax 0 pri 0 droptail
q00005 50 sl. 0 flows (1 buckets) sched 5 weight 0 lmax 0 pri 0 droptail
q00006 50 sl. 0 flows (1 buckets) sched 6 weight 0 lmax 0 pri 0 droptail
HFSC traffic shaper queues:
QUEUE BW SCH PRI PKTS BYTES DROP_P DROP_B QLEN BORRO SUSPE P/S B/S
root_em0 64M hfsc 0 0 0 0 0 0 0 0
qInternetOUT 48M hfsc 0 0 0 0 0 0 0
qACK 8640K hfsc 12043K 703M 0 0 0 3 215
qP2P 960K hfsc 42429K 45636M 0 0 0 64 83005
qVoIP 512K hfsc 0 0 0 0 0 0 0
qGames 9600K hfsc 3576K 592M 0 0 0 0 0
qOthersHigh 9600K hfsc 2463K 665M 0 0 0 4 811
qOthersLow 2400K hfsc 0 0 0 0 0 0 0
qDefault 7200K hfsc 1815K 1170M 0 0 0 8 975
qNoLimiterSTB 4000K hfsc 0 0 0 0 0 0 0
qLink 640K hfsc 0 0 0 0 0 0 0
root_em1 490M hfsc 0 0 0 0 0 0 0 0
qLink 98M hfsc 0 0 0 0 0 0 0
qInternetIN 47M hfsc 0 0 0 0 0 0 0
qACK 8460K hfsc 838180 106M 0 0 0 6 374
qP2P 940K hfsc 27924K 14986M 0 0 0 24 2656
qVoIP 512K hfsc 0 0 0 0 0 0 0
qGames 9400K hfsc 1334 581495 0 0 0 0 0
qOthersHigh 9400K hfsc 8497K 5911M 0 0 0 4 618
qOthersLow 2350K hfsc 5315K 7804M 0 0 0 0 0
qDefault 7050K hfsc 18785K 17996M 0 0 0 32 6931
qNoLimiterSTB 320M hfsc 55549 78583K 0 0 0 0 0
-
@uptownvagrant Thanks for the guide; it is working perfectly for me on my cable WAN (which had very bad bufferbloat). I'm trying to use it for my VDSL WAN as well but am running into an odd issue: if I have the out floating rule enabled, my upload speed is exceptionally slow, no matter how the limiter is configured. If I disable the out floating rule and leave only the in rule, upload speed is fine (but the bufferbloat is back). The in rule works fine and gets rid of the download bufferbloat.
http://www.dslreports.com/speedtest/45277068 - this shows it with the out floating rule enabled
http://www.dslreports.com/speedtest/45277501 - and this is it with it disabled
Any ideas?
TIA
-
@csutcliff Hmm, let's start with this. Can you post the following output from the configuration that's not working with the VDSL circuit?
- Diagnostics / Limiter Info
- Diagnostics / Edit file - /tmp/rules.limiter
- Add something unique to the description of your floating rules, like the word "FQ-CoDel", and then go to Diagnostics / Command Prompt and execute
pfctl -vvsr | grep "FQ-CoDel"
You should get something like the following:
@59(1545172581) match in on igb0 inet all label "USER_RULE: FQ-CoDel WAN-IN" dnqueue(1, 2)
@60(1545172613) match out on igb0 inet all label "USER_RULE: FQ-CoDel WAN-OUT" dnqueue(2, 1)
-
Is this normal? Our dnqueue order is the opposite, and I followed your guide.
@109(1546845642) match in on re0 inet all label "USER_RULE: ETPI-In FQ-CoDel queue" dnqueue(2, 1)
@110(1546845723) match out on re0 inet all label "USER_RULE: ETPI-Out FQ-CoDel queue" dnqueue(1, 2)
Also, I hope you can help me with my problem: implementing your configuration in a dual-WAN (load-balanced) setup on pfSense.
-
Thanks for the reply; here is my limiter info. The 382.375/20.910 Mbit/s limiters are my cable WAN and the 62.000/17.500 Mbit/s ones are my VDSL. The first two queues are cable, the last two are VDSL.
Limiters:
00001: 382.375 Mbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
 sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002: 20.910 Mbit/s 0 ms burst 0
q131074 50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
 sched 65538 type FIFO flags 0x0 0 buckets 0 active
00003: 62.000 Mbit/s 0 ms burst 0
q131075 50 sl. 0 flows (1 buckets) sched 65539 weight 0 lmax 0 pri 0 droptail
 sched 65539 type FIFO flags 0x0 0 buckets 0 active
00004: 17.500 Mbit/s 0 ms burst 0
q131076 50 sl. 0 flows (1 buckets) sched 65540 weight 0 lmax 0 pri 0 droptail
 sched 65540 type FIFO flags 0x0 0 buckets 0 active
Schedulers:
00001: 382.375 Mbit/s 0 ms burst 0
q65537 50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
 sched 1 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 100ms quantum 300 limit 10240 flows 20480 ECN
   Children flowsets: 1
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip 0.0.0.0/0 0.0.0.0/0 7 470 0 0 0
00002: 20.910 Mbit/s 0 ms burst 0
q65538 50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
 sched 2 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 100ms quantum 300 limit 10240 flows 20480 ECN
   Children flowsets: 2
  0 ip 0.0.0.0/0 0.0.0.0/0 4 218 0 0 0
00003: 62.000 Mbit/s 0 ms burst 0
q65539 50 sl. 0 flows (1 buckets) sched 3 weight 0 lmax 0 pri 0 droptail
 sched 3 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 100ms quantum 300 limit 10240 flows 20480 ECN
   Children flowsets: 3
00004: 17.500 Mbit/s 0 ms burst 0
q00004 50 sl. 0 flows (1 buckets) sched 4 weight 1 lmax 0 pri 0 droptail
 sched 4 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 100ms quantum 300 limit 10240 flows 20480 ECN
   Children flowsets: 4
Queues:
q00001 50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
q00002 50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
q00003 50 sl. 0 flows (1 buckets) sched 3 weight 0 lmax 0 pri 0 droptail
q00004 50 sl. 0 flows (1 buckets) sched 4 weight 1 lmax 0 pri 0 droptail
/tmp/rules.limiter
pipe 1 config bw 382375000b droptail
sched 1 config pipe 1 type fq_codel target 5ms interval 100ms quantum 300 limit 10240 flows 20480 ecn
queue 1 config pipe 1 droptail
pipe 2 config bw 20909500b droptail
sched 2 config pipe 2 type fq_codel target 5ms interval 100ms quantum 300 limit 10240 flows 20480 ecn
queue 2 config pipe 2 droptail
pipe 3 config bw 62000Kb droptail
sched 3 config pipe 3 type fq_codel target 5ms interval 100ms quantum 300 limit 10240 flows 20480 ecn
queue 3 config pipe 3 droptail
pipe 4 config bw 17500Kb droptail
sched 4 config pipe 4 type fq_codel target 5ms interval 100ms quantum 300 limit 10240 flows 20480 ecn
queue 4 config pipe 4 droptail
pfctl -vvsr | grep "FQ-CoDel"
@66(1548714795) match in on igb0 inet all label "USER_RULE: VIRGIN_WAN_DL FQ-CoDel queue" dnqueue(1, 2)
@67(1548714885) match out on igb0 inet all label "USER_RULE: VIRGIN_WAN_UL FQ-CoDel queue" dnqueue(2, 1)
@71(1548715528) match in on pppoe0 inet all label "USER_RULE: AAISP_WAN_DL FQ-CoDel queue" dnqueue(3, 4)
@72(1548722103) match out on pppoe0 inet all label "USER_RULE: AAISP_WAN_UL FQ-CoDel queue" dnqueue(4, 3)
-
@csutcliff Hmm, I don't have a VDSL2 PPPoE connection to test with but I'm wondering if the following thread may point you down the correct path.
https://lists.freebsd.org/pipermail/freebsd-ipfw/2016-April/006161.html
Also, why do you have your VDSL limiters set above what you see in the speed test results when they aren't enabled? Can you try setting both limiters to 80% or 85% of your speed test result and then work up from there, keeping an eye on bufferbloat with each subsequent test?
-
@chrcoluk Just a quick note after some additional testing: I believe I ran into what you were seeing when the number of active states passing through the limiter queues far exceeded the FQ-CoDel flows value and flows were no longer being separated. Increasing the flows value allowed flow separation to be maintained, so packets from interactive flows were no longer sent to sub-queues shared with non-interactive flows.
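As an illustration, in the /tmp/rules.limiter syntax shown elsewhere in this thread, raising the flows value is a one-token change on the scheduler line (the pipe number, bandwidth, and chosen flows count here are placeholders, not a recommendation):

```shell
# Sketch: a dummynet fq_codel scheduler with "flows" raised from the 1024
# default so that many concurrent states still hash into distinct sub-queues.
pipe 1 config bw 47Mb droptail
sched 1 config pipe 1 type fq_codel target 5ms interval 100ms quantum 300 limit 10240 flows 4096 noecn
queue 1 config pipe 1 droptail
```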
-
@uptownvagrant I'll have a read, thanks.
Regarding the limiter settings, I obviously haven't been able to tune the upstream setting yet (I've got it at just under 95% of the sync speed), but I have tried it set to only 10 Mbit/s and it makes no difference.
For the downstream, my sync is over 65 Mbit/s and I've settled on 62 Mbit/s after trial and error to see what value eliminated bufferbloat while keeping as much bandwidth available as possible. The reason you don't see 65+ on the speed test without the limiters is that my ISP lets me set the sending rate on the download, which I have set to 95% of sync. This is meant to help with bufferbloat and improve VoIP etc., because the link is never 100% pegged and in theory the wholesale ISP's (their supplier's) buffers never get involved.
Edit: I had a read of the thread you linked. I don't think any of it applies here: I'm using plain PPPoE, not PPPoA; I have a full 1500 MTU to the internet thanks to baby jumbo frames (1508 to the modem to account for the PPPoE header); and my upload speed is not restrictive like in the example. In fact, it's only a couple of Mbit/s shy of the cable connection that works fine.
-
@csutcliff If you set quantum to 1508 and enable the limiter what is the result? Also, does FQ-CoDel perform as expected over the VDSL2 circuit without dual WAN in the mix?
-
@uptownvagrant no change in behaviour with 1508 quantum. I'll test just the VDSL limiters enabled later this evening.
-
The limit could be lowered to help:
https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel/#tuning-fq-codel
"When running it at 1GigE and lower, today it helps to change a few parameters given limitations in today's Linux implementation and underlying device drivers. The default packet limit of 10000 packets is crazy in any other scenario. It is sane to reduce this to a 1000, or less, on anything running at gigE or below. The over-large packet limit leads to bad results during slow start on some benchmarks. Note that, unlike txqueuelen, CoDel derived algorithms can and DO take advantage of larger queues, so reducing it to, say, 100, impacts new flow start, and a variety of other things. We tend to use ranges of 800-1200 in our testing, and at 10Mbit, currently 600."
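In the dummynet syntax used in this thread, that advice would presumably translate to lowering the limit argument on the scheduler line, for example (illustrative values only, not a tested recommendation):

```shell
# Sketch: same scheduler line as seen earlier in the thread, with "limit"
# reduced from 10240 to 1000 per the bufferbloat.net guidance above.
sched 1 config pipe 1 type fq_codel target 5ms interval 100ms quantum 300 limit 1000 flows 1024 noecn
```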
-
@uptownvagrant Sorry for the delay, just got a chance to test it without the dual wan. No change in the result.
-
@robnitro Thanks for the suggestion. It has seemingly improved my cable connection's responsiveness slightly, but doesn't make a difference to the upload problem on VDSL.
-
I wanted to give a brief update. I'm actually here about the issue with traceroutes showing the destination address for every hop, caused by the floating firewall rule.
I initially fixed this on my side by having the rule shape only TCP/UDP, but some clients traceroute via UDP, so I still had the same issue. Instead, there's a firewall option to scope the rule down to particular TCP flags (you can set the ones you wish), but I ultimately ended up targeting "Any" protocol and "Any flags", and this fixed it for me. :)
EDIT: My pfSense randomly crashed... and rebooted... and somehow now it isn't working.
Sigh. Guess I'm back to applying this to a non-floating rule. :(
-
Hi, what rule did you use for non-floating?
When I do a tracert from Windows, it shows only 2 hops: the router and then the destination. From my OpenWrt dumb access point all hops show fine, and from the router itself I get all of the hops. I have a floating quick-match allow rule for any IPv4 ICMP traceroute, and another ICMP-any rule at the top of the list. That should mean it skips the codel limiters, right? I just don't understand why my desktop client gets bad tracert info.
-
I actually just reverted to using a simple TCP/UDP floating rule; setting up the non-floating rules was a huge pain (I had it working before, but can't remember how).
I'm going to create rules that exempt specific clients in the future, now that it's working fine for my primary use case, Windows!
Here is the exact rule I have that is, somehow, letting pfSense UDP traceroute work (I am tired of messing with it / don't care). Anything missing means it's at the default.
Action: Pass
Interface: WAN
Direction: Out
Addr Family: IPv4
Protocol: TCP/UDP
Gateway: WAN_DHCP (my gateway selected)
In/Out Pipe: WANUpQ (my name) & WANDownQ (my name)
Here are the rules I have:
Firewall Rules > Traffic Shaper > Limiters

"CODEL_QMDown"
Limiter: Enabled
Bandwidth: 320 Mbit
Queue Management Algorithm: CoDel
Scheduler: FQ_CODEL
Subqueue "WANDownQ":
Queue algorithm: CoDel

"CODEL_QMUp"
Limiter: Enabled
Bandwidth: 340 Mbit (yes, I have higher upload than download)
Queue Management Algorithm: CoDel
Scheduler: FQ_CODEL
Subqueue "WANUpQ":
Queue algorithm: CoDel
I know these are basically defaults, and I'm not sure why they work. I simply re-created everything, and now everything works great... it's quite odd. I'm curious whether there is an issue with pfSense and having the rules edited, or with large changes made to the queues (I hit Save/Apply after every single change or creation - no going back and editing for me!).
Quite odd. I've used FQ-CoDel in other devices/implementations and have never run into these nuances. It's working great now, though; the traceroute issue was bothering me horribly. With these rules (no floating rule for 'any') it seems to be working great... for whatever reason.
-
Hi all!
I configured the limiters, and now my traceroute always shows the resolved destination IP address instead of the actual hops.
I've read here that this is common, but just to be sure: is it supposed to be like this, or is it a bug? Thanks!
-
Hi guys,
I'm also having an issue with my upload speed. I have a 400 Mbit down / 40 Mbit up connection, and I've followed the guide here to enable the limiters, filling in 400 and 40 as my down and up speeds.
However, when I do speed tests, while my bufferbloat is now gone, I can no longer reach my maximum speeds.
Down I may get around 360 Mbit max, which I can live with, but up is just pathetic: maybe 2-8 Mbit at most. Is this known? Anything I can do to improve it? I played with the queue length value, which helped a bit, but in the end it didn't really have much of an impact.
Limiters:
00001: 400.000 Mbit/s 0 ms burst 0
q131073 10000 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 AQM CoDel target 5ms interval 100ms ECN
 sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002: 40.000 Mbit/s 0 ms burst 0
q131074 1000 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 AQM CoDel target 5ms interval 100ms ECN
 sched 65538 type FIFO flags 0x0 0 buckets 0 active
Schedulers:
00001: 400.000 Mbit/s 0 ms burst 0
q65537 50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
 sched 1 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 1
00002: 40.000 Mbit/s 0 ms burst 0
q65538 50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
 sched 2 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ECN
   Children flowsets: 2
Queues:
q00001 50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 AQM CoDel target 5ms interval 100ms ECN
q00002 50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 AQM CoDel target 5ms interval 100ms ECN
-
You are supposed to set the limiters lower than your actual connection speed. I set mine to 95% of the tested (not ISP-advertised) speed.
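The arithmetic for the 95% rule is straightforward; as a quick sanity check for a 400/40 line (the speeds under discussion here; substitute your own measured numbers):

```shell
# Compute 95% of measured down/up speeds in whole Mbit (illustrative values).
down_measured=400
up_measured=40
echo "down limiter: $(( down_measured * 95 / 100 )) Mbit"   # 380 Mbit
echo "up limiter:   $(( up_measured * 95 / 100 )) Mbit"     # 38 Mbit
```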
Also please use the guide posted a little while earlier. The one from the hangouts session is outdated and does not cover a bug workaround.
https://forum.netgate.com/post/807490
P.S. I am also on a Ziggo NL 240/24 connection.
-
@xciter327 Even at 95% of 400/40 which is 380/38, the upload speed is nowhere near that. The download at least comes in the vicinity.
-
@Veldkornet I don't suppose it's a PPPoE connection? I had a similar problem.
-
Hi, how did you resolve the PPPoE problem?
Thanks!
-
@maverick_slo I didn't. I am only able to use the download shaper for my PPPoE WAN. It works perfectly on my other WAN, which is DHCP.
-
@csutcliff Nope, mine is just a DHCPv4 connection
-
@Veldkornet said in Playing with fq_codel in 2.4:
@xciter327 Even at 95% of 400/40 which is 380/38, the upload speed is nowhere near that. The download at least comes in the vicinity.
Hm, that should not be. Did you follow the guide I linked? Also, what hardware is pfSense running on? Limiters add extra CPU usage. Take a look with "htop -d 1" (you need to install it first: "pkg install htop") and see if you are pegging the CPU while doing the test.
-
@xciter327 Ah! I didn't see the guide. I've now re-created everything as per the guide, although it didn't change the results.
I have a PC Engines APU2. I just had a look in "top" while doing the tests and didn't really see anything climb very high at all. Even then, if it can handle the download, it should be able to manage the upload, which is 10% of the download.
If I watch the test, the upload starts strong and climbs to around 20 Mbit quickly for 2 seconds or so, but then drops to around 4-5 for the remainder of the test.
-
Could you perhaps post some pictures of your firewall rules and limiter config? I am shaping my Ziggo connection on a Zotac NUC, which in theory should be less powerful than the APU2. Also, make sure you clear the states/reset the firewall when applying the limiters, and keep an eye on the system log for any messages when you apply them.
-
I could post screenshots instead of the below, but they would take a lot of space. I just checked everything, though, and except for the speed limits it looks the same as the post to me; it might as well be a copy-paste. Looking at the floating rule screenshot, I do see that the WAN-In FQ-CoDel queue counter is pretty small considering all the tests I was doing... is that normal?
Also, see how the upload just dies off:
FQ_CODEL_OUT
Name: FQ_CODEL_OUT
Bandwidth: 38 Mbit/s
Mask: None
Queue Management Algorithm: Tail Drop
Scheduler: FQ_CODEL
target: 5, interval: 100, quantum: 300, limit: 10240, flows: 20480

fq_codel_out_q
Name: fq_codel_out_q
Mask: None
Queue Management Algorithm: Tail Drop

FQ_CODEL_IN
Name: FQ_CODEL_IN
Bandwidth: 380 Mbit/s
Mask: None
Queue Management Algorithm: Tail Drop
Scheduler: FQ_CODEL
target: 5, interval: 100, quantum: 300, limit: 10240, flows: 20480

fq_codel_in_q
Name: fq_codel_in_q
Mask: None
Queue Management Algorithm: Tail Drop
Firewall Rules - Floating:
policy routing traceroute workaround
Action: Pass
Quick: tick "Apply the action immediately on match."
Interface: WAN
Direction: out
Address Family: IPv4
Protocol: ICMP
ICMP subtypes: Traceroute
Source: any
Destination: any
Description: policy routing traceroute workaround

limiter drop echo-reply under load workaround
Action: Pass
Quick: tick "Apply the action immediately on match."
Interface: WAN
Direction: any
Address Family: IPv4
Protocol: ICMP
ICMP subtypes: Echo reply, Echo request
Source: any
Destination: any
Description: limiter drop echo-reply under load workaround

WAN-In FQ-CoDel queue
Action: Match
Interface: WAN
Direction: in
Address Family: IPv4
Protocol: Any
Source: any
Destination: any
Description: WAN-In FQ-CoDel queue
Gateway: Default
In / Out pipe: fq_codel_in_q / fq_codel_out_q

WAN-Out FQ-CoDel queue
Action: Match
Interface: WAN
Direction: out
Address Family: IPv4
Protocol: Any
Source: any
Destination: any
Description: WAN-Out FQ-CoDel queue
Gateway: WAN_DHCP
In / Out pipe: fq_codel_out_q / fq_codel_in_q
-
Looks good to me. Mine at home is the same, though with lower speeds. When you do the dslreports test, you can open htop (I prefer it because it's easier to deal with multiple cores) in one window and "ipfw sched show" in another to see if the limiters are actually matching traffic. Is anything else running on this box (like Squid or Snort)?
-
@xciter327 said in Playing with fq_codel in 2.4:
Looks good to me. Mine at home is the same, though with lower speeds. When you do the dslreports test, you can open htop (I prefer it because it's easier to deal with multiple cores) in one window and "ipfw sched show" in another to see if the limiters are actually matching traffic. Is anything else running on this box (like Squid or Snort)?
Well, I have Suricata, no Squid, although I turned Suricata off and it made no difference.
The download maxes out the CPU, but the upload doesn't seem to do much.
Download:
00001: 38.000 Mbit/s 0 ms burst 0
q65537 50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
 sched 1 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 100ms quantum 300 limit 10240 flows 20480 NoECN
   Children flowsets: 1
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip 0.0.0.0/0 0.0.0.0/0 2478 112953 488 26508 0
00002: 380.000 Mbit/s 0 ms burst 0
q65538 50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
 sched 2 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 100ms quantum 300 limit 10240 flows 20480 NoECN
   Children flowsets: 2
  0 ip 0.0.0.0/0 0.0.0.0/0 33141 49232048 193 287300 12

Upload:
00001: 38.000 Mbit/s 0 ms burst 0
q65537 50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
 sched 1 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 100ms quantum 300 limit 10240 flows 20480 NoECN
   Children flowsets: 1
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip 0.0.0.0/0 0.0.0.0/0 31 45860 0 0 0
00002: 380.000 Mbit/s 0 ms burst 0
q65538 50 sl. 0 flows (1 buckets) sched 2 weight 0 lmax 0 pri 0 droptail
 sched 2 type FQ_CODEL flags 0x0 0 buckets 1 active
 FQ_CODEL target 5ms interval 100ms quantum 300 limit 10240 flows 20480 NoECN
   Children flowsets: 2
  0 ip 0.0.0.0/0 0.0.0.0/0 16 664 0 0 0
-
Hm, I don't think you will have a great experience with all those things you've loaded on that little box. I read somewhere that the APU2 is good for ~400 Mbps without Suricata or other heavy software.
I would try disabling all add-ons (HAProxy, OpenVPN, pfBlocker, Suricata, SNMP, fancy Unbound settings, TFTP server, etc.) and try vanilla pfSense with just the limiters (via floating rules) and a simple "allow all" on the LAN side. If your CPU is pegged (like you have on the download test), then you're better off running without limiters.
-
Oh? The CPU load on it is usually almost non-existent. This is the first time I've seen it go so high, now with the traffic shaping. Even so, I can understand that the load may be an issue for download, and I'm okay with that.
It's the upload speed that's annoying me... which is not doing much on the CPU side.
-
Well, if I look at the picture, the upload is mostly pegged by Suricata, so that would be the first thing I disable. Also note that not all packages work normally with limiters; there used to be issues with Suricata/Squid and limiters, and I don't know if that was ever fixed.
-
Okay, I disabled all of the packages, ran the test, and the upload was good.
Then I enabled each one and tested until the upload speed decreased. So it seems I have discovered the culprit: I have an OpenVPN client set up to PIA, and this is what's causing the bad upload speeds. I have an alias with an IP range defined, and basically everything within that alias should go over the VPN. Then I have an interface and gateway bound to the OpenVPN client, with a firewall rule saying anything destined for that alias should use the VPN gateway.
Is there anything special I would need to do with my rules in this situation? The test traffic itself isn't going over the VPN.