Playing with fq_codel in 2.4
-
That's a very interesting point. I didn't explicitly select HTTP, and I use the HTTPS Everywhere plugin, so I quite possibly was testing with HTTPS. I'm at work, but when I get home this evening I'll run a test with HTTP and get back to you. Are you aware of a specific reason why that may make a difference, or just know that it does based on your testing?
-
I ran some tests with flent/rrul, and they don't show any issue at all. I also ran an rrul test with the limiters disabled; the bad results with limiters disabled confirmed that basic fq_codel is working. :)
Of note is that rrul seems to be hard coded to use 4 streams only. Testing with DSLReports restricted to 4 streams (http/websocket) doesn't show an issue either.
DSLReports with 8 streams is marginally okay. Above 12 streams or so, DSLReports starts to tank. By the time it gets to 24 streams, it's digging big holes in the ground.
As it seems to be an issue with the number of simultaneous streams, I'm going to try multiple lan systems running simultaneous tests next.
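For anyone who wants to reproduce the flent runs, the invocation is something like this (the hostname below is a placeholder for whatever netperf server you use; run once with limiters on and once with them off to compare):

flent rrul -p all_scaled -l 60 -H netperf.example.com -t "fq_codel limiters on" -o rrul-on.png

If I remember right, the tcp_nup/tcp_ndown tests accept a stream count (e.g. --test-parameter upload_streams=8) if you want to go beyond rrul's fixed 4 streams.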
-
dennypage
http://www.dslreports.com/speedtest/31667858 (plain HTTP)
But I found that wifi clients are a different situation: bufferbloat is rated "C". I remember that last summer I activated QoS on the wifi access point to eliminate this problem, and now it looks like it no longer works. I don't think it's related to pfSense; it may be the wifi client, whose Android kernel has been updated against Spectre/Meltdown.
-
In general my testing is wired, both Linux (Chrome & Firefox) and Mac (Safari and Chrome). I've tested wireless with Mac and the results appear similar at first blush, but I haven't done a lot of testing there.
-
I would expect that with wireless, all bets are off. I run all my testing wired as well. I did run one test last night with 24 streams each up and down and still didn't see degradation on the HTTP dslreports test. I forget what hardware you were running with. Have you monitored CPU usage while running the tests? That seems like a long shot for sure, but if I understand correctly, more flows equals more queues, so queue management becomes a higher burden.
-
My testing was also wired. 32 streams with Hi-Res BufferBloat.
@dennypage: In general my testing is wired, both Linux (Chrome & Firefox) and Mac (Safari and Chrome). I've tested wireless with Mac and the results appear similar at first blush, but I haven't done a lot of testing there.
If I were you, I would try disabling all shapers and limiters, leaving just the plain firewall, and run it again to see whether it's worse or better. I just think it could be an overloaded ISP upstream router, but the simplest way to check is to downgrade pfSense, or to connect a laptop directly to your ISP modem or router and re-test.
-
Hey guys, I've been playing around some more. Could someone please help with my config? It seems that my pipe is not selecting the correct queue.
I've tried; please see the attached screenshot. All seems to be working correctly except that I'm not able to use the queues on the in/out fields
of firewall rules. The problem still persists after a reboot. Thanks in advance for the help.
-
I forget what hardware you were running with. Have you monitored CPU usage while running the tests?
It's a 4860. Peak CPU @ ~25%, average under 20%.
-
@w0w:
If I were you, I would try to disable all shapers, limiters, leaving just plain firewall and let it run again, to see is it worse or better.
Yes, I ran rrul tests with and without limiters enabled, confirming the basic operation of fq_codel with 4 streams.
-
could someone please help with my config? It seems that my pipe is not selecting the correct queue […] the problem still persists after reboot.

I'm not sure that your output is incorrect. At least, it's very similar to mine for the output of ipfw pipe show, and my configuration appears to be working based on test results. What's more interesting to me is that your output for the other two commands (ipfw queue show and ipfw sched show) differs significantly from mine. And I don't know which of ours is correct (if either). How did you set your pipes and queues up? I added my two pipes with no masks, then added two child queues to each pipe with masks (a source address mask for the upload pipe, a destination address mask for the download pipe).
I did find a rather amusing (albeit unhelpful) posting that validated my impression of the output of these commands as . . . inscrutable: https://lists.freebsd.org/pipermail/freebsd-net/2017-March/047705.html
-
@TheNarc: I'm not sure that your output is incorrect. […] How did you set your pipes and queues up? I added my two pipes with no masks, then added two child queues to each pipe with masks (a source address mask for the upload pipe, a destination address mask for the download pipe).
Hey TheNarc, please see screenshots. Apologies, I cannot give my speeds as they are very unstable: currently 5/20, but it ranges from 10/10, 5/10, 8/10, with the upload usually much higher than the download.
Basically I created the 2 pipes (LAN/WAN) and their child queues, all with masks matching my subnet.
The WAN pipe uses the source address and the LAN pipe uses the destination address.
I added a floating rule to allow all, with the DefaultLimiter in/out queues. [The strange thing is that if I use my Low queue there is no speed difference.] Do the queues only work if there is a lot of traffic?
According to DSLReports there is definitely something happening, though, as I'm now getting AAA/A+++ depending on conditions, since I do not have the most stable connection and speed varies.
I want to use time-based queues, as there are certain times of the day/evening when the LTE bands get congested. It's always the same time, so I will be able to drop speed on a schedule to favor latency and avoid bloat. As I say, it is for sure doing something, but hell knows what, lol; there is not much info to go on. My shellcmd command is as follows (note: no ECN on upstream):

ipfw sched 1 config pipe 1 type fq_codel target 5 noecn quantum 300 limit 600 && ipfw sched 2 config pipe 2 type fq_codel target 5 ecn quantum 300 limit 600

If anyone can think of anything I might have missed or done wrong, please post for others to see. Thanks very much for the input, guys.
-
Hey TheNarc, please see screenshots. […] Basically I created the 2 pipes (LAN/WAN) and their child queues, all with masks matching my subnet. […] If anyone can think of anything I might have missed or done wrong, please post.
So, with the caveat that I am quite new to playing with dummynet and fq_codel, I'm fairly certain that you don't want masks on your pipes (limiters), only on your queues. As I understand it, the distinction, as gleaned from the FreeBSD man page for ipfw (https://tinyurl.com/jfzok5z), is that masks set on pipes result in dynamic pipes and masks set on queues result in dynamic queues:
when dynamic pipes are used, each flow will get the same bandwidth as defined by the pipe, whereas when dynamic queues are used, each flow will share the parent's pipe bandwidth evenly with other flows generated by the same queue (note that other queues with different weights might be connected to the same pipe).
That said, since you have masks on your pipes that match your entire subnet, maybe you're effectively achieving the same thing (i.e. you end up with only one download limiter and one upload limiter, because your masking only ends up defining one flow which is your entire subnet). Functionally, though, I would expect that to be identical to just having no masks on your pipes.
On the other hand, I think you want /32 masks on your queues, so that each host on your LAN gets its own queue.
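To make that concrete, here is roughly what I mean in raw ipfw terms (just a sketch; the numbers are placeholders, and in pfSense you would build this through the limiter GUI rather than typing it directly):

# a mask on a pipe => dynamic pipes (each flow gets its own full-bandwidth pipe); avoid
# a mask on a queue => dynamic queues (flows share the parent pipe); what you want
ipfw pipe 1 config bw 8Mbit/s                                  # static pipe, no mask
ipfw queue 1 config pipe 1 weight 50 mask src-ip 0xffffffff    # one dynamic queue per LAN host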
Or I could be entirely wrong :D Here's hoping someone more knowledgeable than I will chime in . . .
-
OK, cool. I tried what was suggested and it seems to show traffic now passing through the queues, although I'm still not sure if the queue weight is working.
Bandwidth still tops out at max. The only way is to create a LAN rule passing everything through the pipes, although the damn queues still don't work:
even setting weight to 1 still gives full speed on speedtests. The only way to get it to limit bandwidth is to actually lower the speeds on the pipe itself.
Any ideas would be greatly appreciated. Thanks so far, guys.
Note: using the single pipe does work great. That is what had me initially thinking it's looking good and working, and it still is, but I want to assign the queues to aliases of high-bandwidth users on the network so as to throttle them, and I'm not sure if it can even be done this way.
-
Ah okay I think I can help with this. The weights on child queues dictate the relative share of the parent pipe in the event that the pipe is maxed out. For example, suppose you have a pipe with a bandwidth limit of 100Mbps and two child queues for it: one with a weight of 10 and the other with a weight of 90. If the queue with a weight of 90 is totally idle, the queue with a weight of 10 will be allowed to use all 100Mbps of the parent pipe. The weight of 10 only means that (in this example) the queue is guaranteed at least 10Mbps of bandwidth; so it's a floor, not a ceiling.
If you want to limit certain hosts on your network to never have more than a certain amount of bandwidth, you'll need to use multiple pipes; they provide a ceiling.
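In raw ipfw terms, the example above would look something like this (a sketch only; 100Mbps and the queue numbers are just the illustration, not a recommendation):

ipfw pipe 1 config bw 100Mbit/s         # the shared ceiling
ipfw queue 1 config pipe 1 weight 10    # floor of ~10% under contention
ipfw queue 2 config pipe 1 weight 90    # floor of ~90% under contention

When queue 2 is idle, traffic in queue 1 can still take the full 100Mbit/s; the weights only matter while the pipe is saturated.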
-
Hey TheNarc, thanks for the help. I'm starting to sort of understand; let me try to play this back to you.
So if I have a 10/10 connection and want to "sort of" restrict certain users using aliases, I would do something like this:
Create maybe 6 pipes, 3 in and 3 out, each with say 2 queues with weights of say 90 and 25, and with the pipes masked to direction /32.
----------------------------------------------------------------
WAN/Upload: Pipe 1 = 8Mb, Pipe 2 = 4Mb, Pipe 3 = 1.5Mb
  QueueHigh = 90 and QueueLow = 25
LAN/Download: Pipe 1 = 8Mb, Pipe 2 = 4Mb, Pipe 3 = 512Kb
  QueueHigh = 90 and QueueLow = 25

Aliases: DefaultTraffic // HighTraffic // LowTraffic
(representing the groups of IPs/computers requiring the rules)
Have some rules for services 80/443 on Default
Have some rules for NNTP 119/563 on Low

FIREWALL/LAN/RULES:
IPv4 * LAN net * * * * none  Default allow LAN to any rule = has pipes, not queues, assigned.

Floating rules:
IPv4 * DefaultTraffic * * * * none  Assigned in/out pipes = 8Mb pipe with 2 child queues as in/out.
IPv4 * HighTraffic * * * * none  Assigned in/out pipes = 4Mb pipe with 2 child queues as in/out.
IPv4 * LowTraffic * * * * none  Assigned in/out pipes = 1.5Mb pipe with 2 child queues as in/out.
----------------------------------------------------------------
ShellCmd, either:

ipfw sched 1 config pipe 1 type fq_codel target 5 noecn quantum 300 limit 600 && ipfw sched 2 config pipe 2 type fq_codel target 5 noecn quantum 300 limit 600 && ipfw sched 3 config pipe 3 type fq_codel target 5 noecn quantum 300 limit 600 && ipfw sched 4 config pipe 4 type fq_codel target 5 ecn quantum 300 limit 600 && ipfw sched 5 config pipe 5 type fq_codel target 5 ecn quantum 300 limit 600 && ipfw sched 6 config pipe 6 type fq_codel target 5 ecn quantum 300 limit 600

or:

ipfw sched 1 config pipe 1 type fq_codel && ipfw sched 2 config pipe 2 type fq_codel && ipfw sched 3 config pipe 3 type fq_codel && ipfw sched 4 config pipe 4 type fq_codel && ipfw sched 5 config pipe 5 type fq_codel && ipfw sched 6 config pipe 6 type fq_codel
Would a configuration something like this work with fq_codel? And would I leave the shellcmd as is, or strip the parameters out? Note the top ipfw command has noecn on the WAN/in pipes.
And would it be advised to run it like this? If possible, could you maybe provide a guide from start to finish which uses multiple pipes and multiple queues, if it is not too much to ask?
I have noticed that if I do it this way, the queue types do not get set as fq_codel but as a dynamic wf2q+ queue, if I saw correctly.
Thanks in advance for the help.
-
I feel that I should emphasize that I'm not any sort of expert on this subject, so I don't want to lead you astray :) But . . . the following is based on my current understanding:
If the only thing you care about is limiting (capping) bandwidth, either globally or on a per-host basis, then all you need are pipes. You only need queues underneath your pipes if you care about how the pipes' limited bandwidth is distributed among various hosts. Let's start with your example, focusing for now only on the pipes. You don't want the cumulative bandwidth of your "upload pipes" to be more than your ISP's upload cap, and the same goes for download. In fact, the prevailing rule of thumb for preventing bufferbloat seems to be that you don't want it to be more than about 85% of your ISP's caps.
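As a concrete example for the 10/10 connection you describe, that rule of thumb works out to roughly this (a sketch; tune the numbers against your own bufferbloat results):

ipfw pipe 1 config bw 8500Kbit/s    # upload ceiling, ~85% of 10Mbit
ipfw pipe 2 config bw 8500Kbit/s    # download ceiling, ~85% of 10Mbit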
The difficult thing about restricting bandwidth on a per-host basis is that I am unaware of a means by which you can, for example, restrict one host to never use more than 2Mbps of upload bandwidth while also allowing other hosts to use all of your available upload bandwidth in the event that the one restricted host is not using its 2Mbps. Because if you make two pipes, one 8Mbps and one 2Mbps, then any hosts directed through the former will never be allowed more than 8Mbps and any hosts directed through the latter will never be allowed more than 2Mbps.
But this would also be an unusual goal, as far as I can tell. Because you can easily use queues to configure things such that a host is allowed to use as much upload bandwidth as it wants so long as no other hosts are using any, but be throttled down to 2Mbps if other hosts start consuming upload bandwidth. I'm going to assume from here on that this is acceptable to you, but if not, let me know.
I would have only a single upload pipe and a single download pipe; maybe try setting both to 9Mbps at first and lower that if you still see unacceptable bufferbloat from the dslreports speed test. Don't set a mask on either pipe.

Next add child queues to each pipe. I'll assume that a "high priority" and a "low priority" queue are sufficient, but you can add as many as you want. I believe that the weights of all child queues should add up to 100, but the system may not enforce this. Nevertheless, it makes things clearer. For example, your existing queue weights add up to 115. If the system didn't give you any error, then I have to assume that the queue with weight 90 is being given 90/115, or ~78%, of its parent pipe's bandwidth, and the queue with weight 25 is being given ~22%. But that's confusing. So I would, as an example with 2 child queues, give the high priority queue a weight of 80 and the low priority queue a weight of 20.

For queues under your upload pipe, set a source address mask of 32; for queues under your download pipe, set a destination address mask of 32. That results in "dynamic queues" such that each host will get its own queue, rather than every host on your network being funneled into the same queue.

I must admit that this is one of the points on which I am least clear myself, because I don't know how the queue weights come into play then. My expectation would be that however many dynamic queues are spawned from a queue with a weight of 10, that weight would be equally distributed among them, but I'm not certain. Referring back to our example of two child queues, one with weight 80 and the other 20, suppose that 4 hosts are currently being directed to the queue with weight 20. With the settings I have described, that should result in 4 dynamic queues, one for each host. But is each of those queues then guaranteed a minimum of 5% of the bandwidth of the parent pipe (as I expect), or 20%? I'd need to do more testing/research to know for sure.
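Pulling that together, the whole arrangement would look something like this in raw ipfw terms (again a sketch; in pfSense you would build it in the limiter GUI, and 9Mbit is just the starting guess above):

# Two static pipes, no masks: the global up/down ceilings
ipfw pipe 1 config bw 9Mbit/s    # upload
ipfw pipe 2 config bw 9Mbit/s    # download

# Weighted child queues; /32 masks give each LAN host its own dynamic queue
ipfw queue 1 config pipe 1 weight 80 mask src-ip 0xffffffff    # high priority upload
ipfw queue 2 config pipe 1 weight 20 mask src-ip 0xffffffff    # low priority upload
ipfw queue 3 config pipe 2 weight 80 mask dst-ip 0xffffffff    # high priority download
ipfw queue 4 config pipe 2 weight 20 mask dst-ip 0xffffffff    # low priority download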
Coming back out of the weeds a bit . . . once you have your pipes and child queues set up as described above, you need to assign traffic to them using firewall rules, as you already know. And you want to assign them in rules on your LAN interface, inbound direction. If you run any servers and may have connections initiated by inbound traffic on your WAN interface, then you'd want rules there too. But if that's not the case, then you only need to match on inbound LAN traffic, setting the "in pipe" to the desired upload queue and the "out pipe" to the desired download queue. Note that if you do need to match on inbound traffic on your WAN interface, this is reversed, because inbound traffic on the LAN interface is upload while inbound traffic on the WAN interface is download.
Your shellcmd examples look fine to me, although obviously they won't be as long if you only have two pipes. I can't provide sound advice on whether you should tweak the default settings for things like quantum and ECN. I'd just experiment and see what works best for your connection.
With respect to your question about fq_codel vs. wf2q+, you should see fq_codel in the output from ipfw sched show after those shellcmds have been run. I did notice that a filter reload seemed to revert back to wf2q+, so I made two shellcmds, one of type "shellcmd" and one of type "afterfilterchangeshellcmd". I hadn't seen any references to anyone else doing this though, so it may or may not truly be necessary, but I don't think it can hurt anything either.
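For reference, my two entries in the shellcmd package look something like this (field names as I recall them from the package GUI; the command is just the sched config line discussed above):

Command: ipfw sched 1 config pipe 1 type fq_codel && ipfw sched 2 config pipe 2 type fq_codel
Shellcmd Type: shellcmd

Command: ipfw sched 1 config pipe 1 type fq_codel && ipfw sched 2 config pipe 2 type fq_codel
Shellcmd Type: afterfilterchangeshellcmd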
So, looking back on this, it's kind of a jumble, and definitely reflects my initial disclaimer that I don't understand all of it myself :) Hopefully it will at least be somewhat useful. I'm still hoping for a real dummynet guru to step in . . .
-
lol :P Thanks, at least that gives me a step in a direction. I will try out what you suggested and post back. At the moment it is not possible:
for some reason from 19:00 to 21:00 the ping goes up to a locked 100ms every day. I'm still trying to get them to give an explanation of what is going on,
as it is not only my connection; it seems like it is the whole LTE network from the provider. Maybe they are too congested, but I'm not really sure, as it
has been going on now for a month. As soon as I try the setup I will let you know what's happening from my side. Thanks again.
-
OK, so I tried as advised and it seems to be working, although my service is so bad at the moment that I will have to post back later with results.
Thanks, guys.
-
TheNarc, this quote has me thinking:
when dynamic pipes are used, each flow will get the same bandwidth as defined by the pipe, whereas when dynamic queues are used, each flow will share the parent's pipe bandwidth evenly with other flows generated by the same queue (note that other queues with different weights might be connected to the same pipe).
I will give a scenario.
One device on the network is running a Steam download as fast as it can, with 32 download threads.
The other has a single YouTube video playing.
With a 0 IPv4 mask, both devices share a queue. As I understand it, this would give the Steam device a 32/33 portion of the bandwidth; each thread on the Steam device would get 1/33 of the queue's (and pipe's) bandwidth, as there would be 33 flows.
With a /32 IPv4 mask, both devices have their own queue.
The queues are allocated half the bandwidth each, assuming equal priority?
So the Steam threads get a total of 16/32 of the pipe's bandwidth, or rather 1/2, and each thread effectively has a 0.5/32 share of the pipe, while the YouTube video on the second device, if it's able to saturate the bandwidth, gets half the pipe to itself.
Do I understand right that queues share bandwidth in this manner? If yes, I prefer the masking per host for sure.
-
I can confirm that your understanding matches my understanding. What I can't confirm is that my understanding is correct :D It's obviously a pretty confusing topic. Because of course you can have more than one child queue per pipe as well, and any child queue may be dynamic or not. In my setup right now, for example, I have two pipes (upload and download) and two child queues for each pipe.
Consider only my download pipe. It has two child queues: one with a weight of 30 and one with a weight of 70 (for low and high priority traffic respectively). Both child queues are dynamic, with /32 masks on the destination address. Based on my understanding, this means that every host on my LAN should get its own queue.
Now, if that's true, suppose I have 5 hosts that are downloading and directed to my 30-weight "low priority" download queue based on firewall rules and 1 host that is downloading and directed to my 70-weight "high priority" download queue. Each of the 5 "low priority" hosts will get their own queue, but will each of those queues have a weight of 30? My expectation would be no; instead, the weight of the child queue should be equally distributed among however many dynamic queues are spawned from it. So in this case, there would be 5 dynamic queues each with a weight of 6 spawned from the "low priority" child queue of weight 30 and 1 dynamic queue of weight 70 spawned from the "high priority" child queue of weight 70.
Maybe that's not exactly how it works, but my hope is that it's at least conceptually accurate. Because it wouldn't make sense if dynamic queues each had the same weight as the child queue from which they were spawned. In the example above, I'd end up with 5 queues of weight 30 and 1 queue of weight 70. So collectively, my 5 low priority hosts would be getting a share of (150/220) or roughly 68% of the parent pipe, and my 1 high priority host would be getting roughly 32%. That situation would turn on its head my original intention of reserving 30% of a saturated pipe for low priority hosts and 70% for high priority hosts.
I don't know if these ruminations are helpful or simply add to the confusion . . . but at least it's fun trying to think it through ;) Still hoping for a true dummynet prodigy to poke his or her head into this thread.