Limiter by IP is grouping IPs into common buckets
-
Running 2.2.4 x64, full install. No Squid or similar package. Single WAN, single LAN. I can't seem to get the results I think I should in Limiter Info. My limiter settings are:
-
LimitLanIn: 6800k, with a child queue masked by source
-
LimitLanOut: 100m, with a child queue masked by destination
I would expect from this that each source/destination IP would get its own bucket. Instead, anywhere from one to several IPs share the same bucket. Is this the way it's supposed to work? It seems the grouped IPs would get less than an equal share of the parent bandwidth compared to buckets holding a single IP, yes?
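For illustration, here is a toy sketch (my own naming, not pfSense or dummynet code) of why a full source/destination mask should yield one flow per host: applying an all-ones mask to the address leaves every distinct IP with a distinct flow key, so each host gets its own dynamic queue under the parent limiter.

```python
# Toy model of limiter mask behavior -- not actual pfSense/dummynet source.
import ipaddress

def flow_key(src_ip: str, src_mask: int) -> int:
    """Apply the limiter's source mask to derive the dynamic-queue flow key."""
    return int(ipaddress.ip_address(src_ip)) & src_mask

MASK_PER_HOST = 0xFFFFFFFF  # "mask source" with all 32 bits set -> one queue per IP

keys = {flow_key(ip, MASK_PER_HOST) for ip in
        ["192.168.70.116", "192.168.20.116", "192.168.20.112"]}
print(len(keys))  # three hosts -> three distinct flow keys, so three queues
```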
I tried putting 512 in the parent queue size, then the parent bucket size, then tried each on the child queue. The bucket count remains at 50 per Limiter Info. It appears the reason some IPs are grouped into one bucket is the 50-slot limit, which apparently isn't user-definable. Below is a sample Limiter Info output for LanOut (downlink). Am I missing something?
Limiters:
00001: 6.800 Mbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002: 100.000 Mbit/s 0 ms burst 0
q131074 50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
sched 65538 type FIFO flags 0x0 0 buckets 0 active
Queues:
q00002 50 sl. 73 flows (256 buckets) sched 2 weight 1 lmax 0 pri 0 droptail
mask: 0x00 0x00000000/0x0000 -> 0xffffffff/0x0000
32 ip 0.0.0.0/0 192.168.150.113/0 27 1348 0 0 0
32 ip 0.0.0.0/0 192.168.20.112/0 192 284906 0 0 0
32 ip 0.0.0.0/0 192.168.130.113/0 75 47823 0 0 0
33 ip 0.0.0.0/0 192.168.40.113/0 5 3138 0 0 0
33 ip 0.0.0.0/0 192.168.20.113/0 3 225 0 0 0
34 ip 0.0.0.0/0 192.168.20.114/0 1 52 0 0 0
34 ip 0.0.0.0/0 192.168.2.114/0 74 95772 0 0 0
35 ip 0.0.0.0/0 192.168.50.115/0 40 57570 0 0 0
36 ip 0.0.0.0/0 192.168.150.117/0 1 52 0 0 0
36 ip 0.0.0.0/0 192.168.70.116/0 19 12480 0 0 0
36 ip 0.0.0.0/0 192.168.140.117/0 1251 1875137 12 18000 18
37 ip 0.0.0.0/0 192.168.140.116/0 13 652 0 0 0
38 ip 0.0.0.0/0 192.168.60.118/0 297 442612 3 4500 0
41 ip 0.0.0.0/0 192.168.130.120/0 202 300747 0 0 0
53 ip 0.0.0.0/0 192.168.110.101/0 274 356030 0 0 0
54 ip 0.0.0.0/0 192.168.150.103/0 8 1978 0 0 0
55 ip 0.0.0.0/0 192.168.140.102/0 19 8119 0 0 0
55 ip 0.0.0.0/0 192.168.2.103/0 17 962 0 0 0
-
Looking at the Limiter Info more closely, specifically at q00001 (the LAN uplink), there is a distinct pattern. It appears pfSense is matching IPs only by the last octet when it decides which bucket to allocate an IP to. Is this a bug or by design? I suspect this is because I have a dozen or so gateways (subnets) added on top of the LAN adapter's base subnet of x.x.2.x/24. So, for example, IPs 192.168.70.116 and 192.168.20.116 both go into the same bucket. Not sure why the outbound doesn't follow this pattern to the letter.
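One possible mechanism behind this (a sketch of the general idea, not dummynet's actual hash function): flows are placed into a hash table with a fixed number of buckets, and a simple address-based hash folded into the bucket range is dominated by the low bits of the address, so IPs differing only in the upper octets collide into the same BKT.

```python
# Hypothetical flow-hash illustration -- dummynet's real hash differs.
import ipaddress

BUCKETS = 256  # matches the "(256 buckets)" shown in the Limiter Info output

def bucket(ip: str) -> int:
    # An address-modulo hash like this keeps only the low 8 bits,
    # i.e. the last octet, so x.x.70.116 and x.x.20.116 collide.
    return int(ipaddress.ip_address(ip)) % BUCKETS

print(bucket("192.168.70.116"), bucket("192.168.20.116"))  # both print 116
```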
So, to get the limiter working dynamically as intended, will I have to create a child queue for every gateway on the LAN adapter, mask each to its corresponding gateway subnet, and then put them all under one parent queue? If so, do I give all child queues the same weight, or will it weight evenly if the "weight" field is left blank (default)?
Limiters:
00001: 6.800 Mbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x0 0 buckets 0 active
Queues:
q00001 50 sl. 78 flows (256 buckets) sched 1 weight 1 lmax 0 pri 0 droptail
mask: 0x00 0xffffffff/0x0000 -> 0x00000000/0x0000
BKT Prot Source IP/port_ Dest. IP/port Tot_pkt/bytes Pkt/Byte Drp
64 ip 192.168.70.116/0 0.0.0.0/0 17 1667 0 0 0
64 ip 192.168.20.116/0 0.0.0.0/0 36 4329 0 0 0
68 ip 192.168.60.118/0 0.0.0.0/0 5 336 0 0 0
72 ip 192.168.20.112/0 0.0.0.0/0 3 936 0 0 0
74 ip 192.168.40.113/0 0.0.0.0/0 1 725 0 0 0
74 ip 192.168.150.113/0 0.0.0.0/0 67 2680 0 0 0
78 ip 192.168.50.115/0 0.0.0.0/0 2 159 0 0 0
78 ip 192.168.20.115/0 0.0.0.0/0 36 4819 0 0 0
78 ip 192.168.130.115/0 0.0.0.0/0 136 23084 0 0 0
80 ip 192.168.110.124/0 0.0.0.0/0 41 8411 0 0 0
80 ip 192.168.20.124/0 0.0.0.0/0 14 862 0 0 0
82 ip 192.168.70.125/0 0.0.0.0/0 2 100 0 0 0
82 ip 192.168.20.125/0 0.0.0.0/0 109 14480 0 0 0
84 ip 192.168.70.126/0 0.0.0.0/0 11 966 0 0 0
86 ip 192.168.140.127/0 0.0.0.0/0 2 144 0 0 0
88 ip 192.168.130.120/0 0.0.0.0/0 822 488084 1 1500 13
90 ip 192.168.60.121/0 0.0.0.0/0 3 495 0 0 0
94 ip 192.168.60.123/0 0.0.0.0/0 4 208 0 0 0
96 ip 192.168.2.100/0 0.0.0.0/0 3 180 0 0 0
98 ip 192.168.20.101/0 0.0.0.0/0 1 52 0 0 0
100 ip 192.168.140.102/0 0.0.0.0/0 25 11456 0 0 0
102 ip 192.168.20.103/0 0.0.0.0/0 135 186177 0 0 0
102 ip 192.168.150.103/0 0.0.0.0/0 47 41181 0 0 0
102 ip 192.168.110.103/0 0.0.0.0/0 4 610 0 0 0
102 ip 192.168.2.103/0 0.0.0.0/0 45 37422 0 0 0
102 ip 192.168.60.103/0 0.0.0.0/0 290 11660 0 0 0
Were you ever able to resolve this, or get any further information on why this happens or what this means?
I have 2.3.2-RELEASE (amd64). Under Traffic Shaper / Limiters, I am able to change the limiter and child queue settings for "Queue size (slots)" and "Bucket size (slots)". I have set these all to 512 for the relevant limiters and queues.
Under Diagnostics/Limiter Info, I can see that the slots have indeed changed to 512.
However, I still see the limiters grouping IPs into the same BKT number based on the last octet! This seems like a really arbitrary way to group IPs, and it would seem to defeat the purpose of the mask/source-or-destination-address settings, whose intent is to give every host the same bandwidth. Is that really what is happening?
-
In case anyone is still wondering about this: I did some testing in which I made several IP addresses share the same last octet (e.g. 172.16.0.11, 172.16.1.11, and 172.16.2.11). They WERE given the same BKT number, implying they were put into the same bucket for no reason other than sharing the last octet of their IP addresses, but this DID NOT impact each IP's ability to get its own proper share of the total bandwidth. This means the limiters WERE working as intended, and each source/destination IP WAS getting its properly divided share of the pipe bandwidth.
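A rough sketch of why a shared BKT need not mean shared bandwidth (an assumption consistent with the test above, not verified against the dummynet source): if BKT is just a hash-table slot, flows that collide are chained in the same slot but each keeps its own queue, so a fair scheduler can still split the parent bandwidth per flow.

```python
# Toy model: colliding flows share a hash slot, not a queue or a share.
from collections import defaultdict

BUCKETS = 256

def bucket(last_octet: int) -> int:
    return last_octet % BUCKETS  # collides for all .11 hosts, as observed

# Three hosts ending in .11 land in the same slot...
table: dict[int, dict] = defaultdict(dict)
for host in ["172.16.0.11", "172.16.1.11", "172.16.2.11"]:
    table[bucket(11)][host] = {"share": None}

slot = table[bucket(11)]
# ...but each host still has its own entry (queue) inside that slot,
# so each flow can be given an equal share of the parent bandwidth.
parent_bw = 6800  # kbit/s, from the LimitLanIn example above
for host in slot:
    slot[host]["share"] = parent_bw / len(slot)

print(len(table), len(slot))  # 1 slot, 3 independent flows
```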
-
Sorry SSP, I had notify on but somehow missed your first reply. No, I never did get a response, and I haven't had time to revisit the issue without something to go on. So glad you took the time to test the actual behavior of the dynamic limiter. Odd that the limiter screen shows buckets being shared.