Playing with fq_codel in 2.4
-
I have fq_codel working on my system without issue. I followed the screenshots from post #121.
Question:
If I apply the same LAN/WAN queues to the In/Out pipes on my IPsec interface rule, will bandwidth then be shared evenly between multiple IPsec clients?
I have several people who access server resources, and it would be great if the bandwidth were shared evenly when everyone is trying to perform a get operation.
Thanks
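For context, whether the sharing is per-client comes down to the queue's mask: a per-host mask makes dummynet create one dynamic queue per address, and the scheduler then shares the pipe across those queues. A minimal sketch of the idea, with made-up pipe/queue numbers and bandwidth:

```
# hypothetical download limiter running fq_codel
ipfw pipe 1 config bw 50Mb
ipfw sched 1 config pipe 1 type fq_codel
# a full dst-ip mask gives each destination host its own dynamic queue,
# so multiple clients pulling data at once get an even share of the pipe
ipfw queue 1 config pipe 1 mask dst-ip 0xffffffff
```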
-
To the guys saying they only had to enable it in the CLI and "nothing" else:
You didn't do this step?
Start with a recent 2.4 snapshot. Create two root limiters, Download and Upload, and put 95% of your maximum values in bandwidth. Create two queues under each, say LAN and WAN. For LAN, select destination addresses for the mask, and source addresses for WAN. Modify the default outgoing firewall rule to use WAN under the "in" pipe and LAN under the "out" pipe.
Also, is the limiter surviving all filter reloads?
-
To the guys saying they only had to enable it in the CLI and "nothing" else:
You didn't do this step?
Start with a recent 2.4 snapshot. Create two root limiters, Download and Upload, and put 95% of your maximum values in bandwidth. Create two queues under each, say LAN and WAN. For LAN, select destination addresses for the mask, and source addresses for WAN. Modify the default outgoing firewall rule to use WAN under the "in" pipe and LAN under the "out" pipe.
Also, is the limiter surviving all filter reloads?
Yes, I did that step. When I say I only used the command line, I mean I did not install a patch of any kind. I use the Shellcmd package to run the command again each time my system boots.
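For anyone wondering what that looks like: a minimal sketch of a Shellcmd entry, assuming the limiter's scheduler is number 1 as in the command quoted later in this thread:

```
# re-run at every boot via the Shellcmd package:
# switch the dummynet scheduler for pipe 1 over to fq_codel
ipfw sched 1 config pipe 1 type fq_codel
```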
-
Part of the challenge in trying to figure out what gives better performance is your ISP and what may or may not be going on with your local network.
I've got a 1Gb FIOS line and a pretty 'quiet' neighborhood, so I tend to get a very consistent speed for upload and download when I'm testing. Since it's not a pure 'lab' scenario, you can't really be sure of the variables in your testing.
I've noticed:
- FQ_Codel seems to have a bit less overhead than HFSC/Codel
- If I get my upload and download speeds set properly, I can get straight A+s on any bufferbloat test
- If I have multiple things going on or something not configured correctly, I tend to get problems
- If you are using a straight-up limiter and equally sharing bandwidth across all LAN connections, for example, you won't see your max upload/download, as you have it shared equally. To that point, in OPNsense, you would configure a limiter and "weight" your FW rules to prioritize what you wanted.
My rules would look something like:
```
Limiters:
10000: 940.000 Mbit/s    0 ms burst 0
q75536  50 sl. 0 flows (1 buckets) sched 10000 weight 0 lmax 0 pri 0 droptail
 sched 75536 type FIFO flags 0x0 0 buckets 0 active
10001: 880.000 Mbit/s    0 ms burst 0
q75537  50 sl. 0 flows (1 buckets) sched 10001 weight 0 lmax 0 pri 0 droptail
 sched 75537 type FIFO flags 0x0 0 buckets 0 active
Queues:
q10002  50 sl. 0 flows (1 buckets) sched 10001 weight 100 lmax 0 pri 0 AQM CoDel target 5ms interval 100ms NoECN
q10003  50 sl. 0 flows (1 buckets) sched 10001 weight 10 lmax 0 pri 0 AQM CoDel target 5ms interval 100ms NoECN
q10000  50 sl. 0 flows (1 buckets) sched 10000 weight 100 lmax 0 pri 0 AQM CoDel target 5ms interval 100ms NoECN
q10001  50 sl. 0 flows (1 buckets) sched 10000 weight 10 lmax 0 pri 0 AQM CoDel target 5ms interval 100ms NoECN
```
Which created some buckets that are then weighted by my firewall rules.
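A hypothetical reconstruction of the ipfw commands behind that output (the numbers and CoDel options are read off the listing above; this is illustrative, not the actual commands used):

```
# assumed reconstruction, for illustration only
ipfw pipe 10000 config bw 940Mb   # download limiter
ipfw pipe 10001 config bw 880Mb   # upload limiter
# two CoDel queues per direction: weight 100 for priority, 10 for bulk
ipfw queue 10000 config pipe 10000 weight 100 codel target 5ms interval 100ms noecn
ipfw queue 10001 config pipe 10000 weight 10 codel target 5ms interval 100ms noecn
ipfw queue 10002 config pipe 10001 weight 100 codel target 5ms interval 100ms noecn
ipfw queue 10003 config pipe 10001 weight 10 codel target 5ms interval 100ms noecn
```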
I try to use the concept that simpler is better, as I have very limited rules and only really lower my Plex download traffic and prioritize my gaming traffic. Everything else just falls into the defaults.
-
To that point, in OPNsense, you would configure a limiter and "weight" your FW rules to prioritize what you wanted.
It works the same way in pfSense. I weight my guest network to 10% of my bandwidth.
So if there is no LAN traffic, then guest can use all the bandwidth. When someone on the LAN starts using bandwidth, it will throttle guest all the way down to 10% as necessary.
It's great: limits without wasting bandwidth. Of course, you can set hard limits as well if you need to.
-
To that point, in OPNsense, you would configure a limiter and "weight" your FW rules to prioritize what you wanted.
It works the same way in pfSense. I weight my guest network to 10% of my bandwidth.
So if there is no LAN traffic, then guest can use all the bandwidth. When someone on the LAN starts using bandwidth, it will throttle guest all the way down to 10% as necessary.
It's great: limits without wasting bandwidth. Of course, you can set hard limits as well if you need to.
Apologies, as I don't mean to state the obvious, so don't read into this as anything more than a statement: there is always traffic going on, even if the plan is to share out across a LAN.
I always see some traffic going on, which is specifically why I avoided equal sharing across my LAN and focused more on prioritizing hosts. All those Echos, ATVs and such are chatty :)
-
I don't think you're understanding.
Example:
On a 100/100 limiter.
LAN is weight 90, Guest is weight 10. LAN is unused, background traffic only (let's say ~2Kbps): Guest has up to 99998Kbps of bandwidth available.
In short, Guest is free to use as much of the available bandwidth as they want, less whatever LAN is using (Guest can only ever take away 10% of the total available bandwidth from LAN; likewise, LAN can only ever take away 90% of the total available from Guest). So neither network will be limited at all until the pipe is full. The same principle is true for clients within each individual network.
Equal sharing does not mean that your bandwidth is automatically divided up between the number of clients on the network, with each given a hard limit.
I.e., a 100Mbps limiter with 10 clients on the network does not automatically limit those clients to 10Mbps each all the time. That scenario would only ever happen if the pipe was full and ALL 10 clients were asking for >10Mbps simultaneously. The instant even one client backed off, that client's bandwidth would be distributed back out into the pool of available bandwidth.
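To make that split concrete, a minimal dummynet sketch (a hypothetical 100 Mbit/s pipe; pipe/queue numbers are illustrative):

```
# hypothetical 100 Mbit/s pipe shared by two weighted queues
ipfw pipe 1 config bw 100Mb
# under contention LAN:Guest converge to 90:10; when one side is idle,
# the other can use the whole pipe
ipfw queue 1 config pipe 1 weight 90   # LAN
ipfw queue 2 config pipe 1 weight 10   # Guest
```
-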
I don't think you're understanding.
Example:
On a 100/100 limiter.
LAN is weight 90, Guest is weight 10. LAN is unused, background traffic only (let's say ~2Kbps): Guest has up to 99998Kbps of bandwidth available.
In short, Guest is free to use as much of the available bandwidth as they want, less whatever LAN is using (Guest can only ever take away 10% of the total available bandwidth from LAN; likewise, LAN can only ever take away 90% of the total available from Guest). So neither network will be limited at all until the pipe is full. The same principle is true for clients within each individual network.
I understood what you said. I used the term "equally sharing bandwidth across all LAN connections" in my post and you repeated my example of weighting, which I said I used.
-
Ok I see.
My point was that you made it sound like only OPNsense offered this feature, which is incorrect.
-
Ok I see.
My point was that you made it sound like only OPNsense offered this feature, which is incorrect.
Ah, OK, as that wasn't my point. I just wanted to share that both FQ-Codel and HFSC/Codel work well when configured right; my finding from quite a bit of testing was that FQ-Codel was more efficient, but not by much, and I had working results with both.
-
My bad, my bad! It was a really late couple of nights haha.
-
Hi guys,
Have been following the discussion on how to set up weights on the queues. I wanted to go through an example to make sure I understand correctly:
Let's assume I have 3 subnets (LAN1 - 3) and one guest network. I'd like to make sure that when under load, no LAN (or guest network) can hog all the bandwidth.
To set this up with limiters, I would:
Create an upload and download limiter and then create under each:
Download: Create 4 queues (one for each subnet with weight 30, and one for the guest network with weight 10)
Upload: Create 4 queues (one for each subnet with weight 30, and one for the guest network with weight 10)
Assuming I had a 100/100 connection, this would ensure that:
With no load, any of the subnets, including the guest network, could consume up to 100Mbit.
Assuming the connection is maxed out, LAN1 - 3 are limited to 30Mbit each, and the guest network is limited to 10Mbit.
In the situation where e.g. only LAN1 and LAN2 are trying to use all the bandwidth, how would it work (i.e. no traffic on LAN3 and the guest network)? Depending on which subnet started using the bandwidth first, is either able to go up to 70Mbit, as the other is guaranteed at least 30Mbit?
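In dummynet terms, I picture the download side looking something like this (a sketch with made-up pipe/queue numbers):

```
# hypothetical download limiter: 100 Mbit/s, four weighted queues
ipfw pipe 1 config bw 100Mb
ipfw queue 1 config pipe 1 weight 30   # LAN1
ipfw queue 2 config pipe 1 weight 30   # LAN2
ipfw queue 3 config pipe 1 weight 30   # LAN3
ipfw queue 4 config pipe 1 weight 10   # guest
```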
Thanks in advance for your help and explanation, I really appreciate it.
-
You got it: in the LAN 1 and 2 only scenario it would go to 50Mbps for each, since they are weighted equally (100 × 30/(30+30) = 50Mbps).
The speeds will certainly have transient periods of asymmetric throughput but will balance out.
-
You got it: in the LAN 1 and 2 only scenario it would go to 50Mbps for each, since they are weighted equally (100 × 30/(30+30) = 50Mbps).
The speeds will certainly have transient periods of asymmetric throughput but will balance out.
Thanks! I configured everything as described and was able to test it out by running a speed test on the three LANs concurrently. It was nice to see speeds adjusting so that every LAN got its fair share as determined by the weights, yet if the other two LANs are idle, the third LAN can still use all the bandwidth.
Thanks again for the help. I think it's great how, with proper traffic shaping, one can really get the most out of a lower-bandwidth connection; e.g. 50/50 or 75/75 will go a long way with proper shaping vs. spending extra $ to upgrade to more bandwidth to try to solve the problem.
-
The problem I'm having with fq_codel is shaping OpenVPN. It's not clear how best to apply fq_codel to OpenVPN for my setup.
There are two options here:
- Apply fq_codel to the WAN firewall rule for OpenVPN. This works well for site-to-site VPNs. If I send highly-compressible data, then the LZ4 compression works and I get higher throughput. Uncompressible data is shaped normally and works well. This doesn't work well for a road-warrior connection, though: when the road warrior accesses the Internet, that traffic is not handled by fq_codel, and if it saturates the link then it's like not having fq_codel at all.
- Apply fq_codel to the OpenVPN interface firewall rules. This apparently breaks compression, as I couldn't get rates that exceeded the limiter speed.
With the old codelq applied to WAN, it didn't seem to matter what I did, as it would always do a pretty good job of keeping latency under control with/without OpenVPN, highly-compressible data, etc. fq_codel does a better job, but having to apply it to every firewall rule is a bit of a configuration tangle.
*Applying fq_codel to the WAN firewall rule for OpenVPN and sending highly-compressible data does introduce a lot of latency for me, but it's still OK. It's much worse without fq_codel.
For reference:
- Idle: 8ms
- Regular upstream saturation with fq_codel: 12-18ms
- Highly-compressible upstream saturation: 100ms
- No fq_codel/codel upstream saturation: 1500ms
-
I have two queues created under the "download" limiter and they show up in Limiter Info, but when I create the schedule only one queue gets added…
Does the command "ipfw sched 1 config pipe 1 type fq_codel" need to be modified to tell it to include all queues? I'm trying to add a lower weight to the guest network.
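What I have in mind is something like this (queue numbers made up for illustration):

```
# the sched command only sets the scheduler type for the pipe...
ipfw sched 1 config pipe 1 type fq_codel
# ...each queue is bound to the pipe by its own config line,
# which is where the lower weight for the guest network would go
ipfw queue 1 config pipe 1 weight 90   # LAN
ipfw queue 2 config pipe 1 weight 10   # guest
```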
Edit: one other observation. I followed the screenshots from post #121, but I needed to set the mask to match my subnets or multiple clients were clashing and still causing bufferbloat.
![Screen Shot 2017-10-28 at 10.40.24 PM.png](/public/imported_attachments/1/Screen Shot 2017-10-28 at 10.40.24 PM.png)
![Screen Shot 2017-10-28 at 10.40.24 PM.png_thumb](/public/imported_attachments/1/Screen Shot 2017-10-28 at 10.40.24 PM.png_thumb)
![Screen Shot 2017-10-28 at 10.40.08 PM.png](/public/imported_attachments/1/Screen Shot 2017-10-28 at 10.40.08 PM.png)
![Screen Shot 2017-10-28 at 10.40.08 PM.png_thumb](/public/imported_attachments/1/Screen Shot 2017-10-28 at 10.40.08 PM.png_thumb)
-
I really don't get much difference. I was using OPNsense and fq_codel prior, as it seemed to just work better for me.
With the new release, I changed back and just use HFSC queues with codel checked and some very basic rules to make sure my gaming traffic is first and my unimportant traffic (downloads for media and other odd Plex-related download stuff) is limited. Works like a champ.
The only thing for me always comes back to making sure my upload and download limits closely match what I actually expect out of my link, so I use 940 down and 880 up on Verizon's Gigabit FIOS with a 1000 queue length. No drops and no bufferbloat that I've been able to make happen.
-
I have been using ALTQ FAIRQ + Codel Active Queue Management on my 150/150 link, along with the queue set to 1024 in the child queue. My question is: does it make more sense to set the queue in the Codel child or the FAIRQ parent? Will I see a performance difference?
-
I have been using ALTQ FAIRQ + Codel Active Queue Management on my 150/150 link, along with the queue set to 1024 in the child queue. My question is: does it make more sense to set the queue in the Codel child or the FAIRQ parent? Will I see a performance difference?
I've stuck with HFSC and codel on the child queues with a queue limit of 1000. Works perfectly for me. I use a very simplistic setup, as I only have high, default, and low queues; gaming/VoIP is high and my download/sync traffic is low. Everything else is just defaults.
-
Hello,
I am trying to configure weights on the queues with fq_codel.
I have a LAN, a WAN and a VPN (IPsec). I want to put a higher weight on the VPN (weight 90) and a lower weight on my WAN (weight 10).
I use floating rules on the LAN interface to match my flows.
My limiters:
- Download: 1800kbit/s
  - Download_LOW: weight 10 (mask /24 with "destination addresses")
  - Download_HIGH: weight 90 (mask /24 with "destination addresses")
- Upload: 1800kbit/s
  - Upload_LOW: weight 10 (mask /24 with "source addresses")
  - Upload_HIGH: weight 90 (mask /24 with "source addresses")
Here is my /root/rules.limiter file:
```
pipe 1 config bw 1800Kb
sched 1 config pipe 1 type fq_codel
queue 1 config pipe 1 weight 10 mask dst-ip6 /128 dst-ip 0xffffff00
queue 2 config pipe 1 weight 90 mask dst-ip6 /128 dst-ip 0xffffff00
pipe 2 config bw 1800Kb
sched 2 config pipe 2 type fq_codel
queue 3 config pipe 2 weight 10 mask src-ip6 /128 src-ip 0xffffff00
queue 4 config pipe 2 weight 90 mask src-ip6 /128 src-ip 0xffffff00
```
It works without fq_codel, but with fq_codel the limiters work while the weights on the queues do not: my queues are not used.
When I do an "ipfw sched show", I do not understand why my queue 2 is present in the second limiter.
```
# ipfw sched show
00001: 1.800 Mbit/s 0 ms burst 0
q65537 50 sl. 0 flows (1 buckets) sched 1 weight 0 lmax 0 pri 0 droptail
 sched 1 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 7ms interval 100ms quantum 2000 limit 10240 flows 2048 ECN
   Children flowsets: 2 1
00002: 1.800 Mbit/s 0 ms burst 0
q00002 50 sl. 0 flows (256 buckets) sched 1 weight 90 lmax 0 pri 0 droptail
 mask: 0x00 0x00000000/0x0000 -> 0xffffff00/0x0000
 sched 2 type FQ_CODEL flags 0x0 0 buckets 0 active
 FQ_CODEL target 7ms interval 100ms quantum 2000 limit 10240 flows 2048 ECN
   Children flowsets: 4 3

# ipfw queue show
q00001 50 sl. 0 flows (256 buckets) sched 1 weight 10 lmax 0 pri 0 droptail
 mask: 0x00 0x00000000/0x0000 -> 0xffffff00/0x0000
q00002 50 sl. 0 flows (256 buckets) sched 1 weight 90 lmax 0 pri 0 droptail
 mask: 0x00 0x00000000/0x0000 -> 0xffffff00/0x0000
q00003 50 sl. 0 flows (256 buckets) sched 2 weight 10 lmax 0 pri 0 droptail
 mask: 0x00 0xffffff00/0x0000 -> 0x00000000/0x0000
q00004 50 sl. 0 flows (256 buckets) sched 2 weight 90 lmax 0 pri 0 droptail
 mask: 0x00 0xffffff00/0x0000 -> 0x00000000/0x0000
```
What is wrong? Thanks for your help.
-
I believe I have everything set up per post #120 and beyond. My output for ipfw sched show looks correct, Shellcmd is all set, and the router has been rebooted. I'm testing on a network where no other traffic is going on except my computer. My bufferbloat for downloading on dslreports has improved greatly (300+ms down to 51ms avg), but I can't seem to get rid of the bufferbloat on the upload (the value slides between 300-1000ms depending on the bandwidth value I select for the limiter). My connection is slow compared to most of the folks I've seen in this thread (15 down / 1 up). I'm just wondering: will I ever be able to completely dial out the bufferbloat on a slow link like mine, or do I just need to keep experimenting with different bandwidth values?