Multi-LAN traffic shaping
A little background first.
I have a multi-LAN setup at home. There are three LANs on one physical interface of my WatchGuard X750e, using VLANs to separate the following LANs:
- **LAN** (native VLAN 1)
  - Main LAN with servers.
  - Currently hosts VoIP clients, although these will be moved to LEGACY in due course.
  - WPA2 Enterprise-protected WiFi access, etc.
- **GUEST** (VLAN 20)
  - WPA2 wireless access for visitors, allowing access only to the internet, suitably firewalled from the other LANs.
- **LEGACY** (VLAN 21)
  - WiFi LAN for a couple of old devices which only support WPA and so have a wireless network of their own, suitably firewalled from the other LANs. Once I get a VLAN-aware switch I may move wired VoIP devices onto this network.
It would seem to make a lot of sense to use traffic shaping so that guests connecting to the WPA2 WiFi can't hog so much of my WAN connection that VoIP traffic (or anything else of importance on the other LANs) suffers.
I therefore used the wizard to set up traffic shaping, as follows.
I have a single WAN and multiple LANs, so I selected the second option.
I then entered the bandwidth of my WAN, which is approximately 39Mbps down and 6Mbps up.
After completing the wizard, enabling VoIP support, and giving DNS a higher priority, I ended up with the following queues.
Ignore PORT4 at the bottom; that's just another port on the router that I use for occasional experiments, and it isn't in normal use.
This is where the confusion kicks in. I'm trying to share bandwidth between LANs, so I'd expect each to be assigned a proportion of the available bandwidth.
The entry for LAN is:
The qLink entry under LAN is:
The qInternet entry under LAN is:
Now, I'd have expected the LAN entry (and the GUEST and LEGACY entries) to have some proportion of the available bandwidth assigned, but no such assignment exists. qLink is assigned 20%, but 20% of what, given that its parent has no defined bandwidth? Then the qInternet entry claims to have 39Mbps to share out amongst its child queues, but so do the qInternet entries under GUEST and LEGACY. The WAN capacity is 39Mbps in total, not the aggregate of three LANs each claiming 39Mbps.
Can somebody please explain this?
Also, why does the WAN queue (which has the expected 6Mbps limit) have its queues directly under it, including a qDefault, whereas the wizard creates qLink and qInternet entries for each LAN? What is the reason for this asymmetry?
The way these queues are defined seems counterintuitive to me. The LANs share a single 1Gbps link; the WAN is limited to 39Mbps down and 6Mbps up. Why, then, do the LANs even mention the 39Mbps limit? It seems the limits are expressed in terms of the maximum bandwidth that can be transmitted onto each network, but the LANs can clearly accommodate a total of 1Gbps, not 39Mbps. Is that the reasoning behind this nomenclature?
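To make the confusion concrete, the per-LAN hierarchy the wizard builds looks roughly like this in pf.conf ALTQ terms. This is a hand-written sketch reconstructed from the GUI, not the literal rules pfSense generates; `em1` and the child-queue list are placeholders:

```
# Sketch only: one LAN's hierarchy as the wizard appears to build it (HFSC).
# Each of LAN/GUEST/LEGACY gets its own copy, each claiming the full 39Mb.
altq on em1 hfsc bandwidth 1Gb queue { qLink, qInternet }
queue qLink on em1 bandwidth 20% hfsc ( default )   # 20% of what, if the parent has no bandwidth?
queue qInternet on em1 bandwidth 39Mb hfsc { qACK, qVoIP, qOtherHigh, qOtherLow }
```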
Finally, the wizard has created identically named queues for each LAN. Each therefore has a qVoIP queue, and there is a single floating rule created using this name. Are there actually three qVoIP queues or a single one?
![Screen Shot 2014-05-24 at 16.16.05.png](/public/imported_attachments/1/Screen Shot 2014-05-24 at 16.16.05.png)
![Screen Shot 2014-05-24 at 16.16.34.png](/public/imported_attachments/1/Screen Shot 2014-05-24 at 16.16.34.png)
![Screen Shot 2014-05-24 at 16.21.17.png](/public/imported_attachments/1/Screen Shot 2014-05-24 at 16.21.17.png)
![Screen Shot 2014-05-24 at 16.22.19.png](/public/imported_attachments/1/Screen Shot 2014-05-24 at 16.22.19.png)
![Screen Shot 2014-05-24 at 16.22.40.png](/public/imported_attachments/1/Screen Shot 2014-05-24 at 16.22.40.png)
![Screen Shot 2014-05-24 at 16.22.59.png](/public/imported_attachments/1/Screen Shot 2014-05-24 at 16.22.59.png)
I deleted the qLink queue and created a new qDefault default queue under qInternet. I've then been able to tune the qInternet upper limit to 38Mbps such that ping times (ICMP is assigned to qOtherHigh) stay low under maximum load. Increasing this to 39Mbps sees pings rise from 13/27/47 min/avg/max to 38/46/55, so this tuning makes quite a difference. It is also very noticeable when making a VoIP call.
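In queue terms, capping qInternet slightly below the measured line rate keeps the bottleneck (and therefore the queue) on pfSense rather than in the modem's buffers, which is why the latency drops. A sketch of the idea, with a hypothetical interface name and the exact syntax unverified against what the GUI writes:

```
# Keep the cap just under the real 39Mb line rate so pfSense, not the
# modem, decides which packets wait.
queue qInternet on em1 bandwidth 38Mb hfsc ( upperlimit 38Mb ) { qVoIP, qOtherHigh, qDefault }
```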
That bandwidth does seem to get shared between the LANs, but I'm not sure how to give less bandwidth to one LAN compared to another.
Now I need to explore m1/delay/m2 tuning.
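For anyone following along: in HFSC, m1/d/m2 define a two-piece service curve — the queue is served at rate m1 for the first d milliseconds after it becomes backlogged, then at rate m2 thereafter. A hypothetical example, with numbers invented purely for illustration:

```
# Serve qVoIP at 1Mb for the first 50ms of a backlog (covers call setup
# bursts), then settle to a steady 256Kb guarantee.
queue qVoIP bandwidth 256Kb hfsc ( realtime (1Mb 50 256Kb) )
```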
Hey there, did you ever find an answer to this?
I'm currently wondering the same thing when I create a CBQ QoS scheme.
What is the function of the qLink queue? Why is it created in this specific hierarchy?
Anyone else have any help they could offer?
No, I never got a response to this, nor could I find any decent information anywhere else. This is clearly poorly understood functionality! Using the tuning I described above I have achieved the desired traffic shaping; however, it hasn't allowed me to control the division of bandwidth between my local subnets. My queues appear as shown below, where each of LAN/GUEST/LEGACY has a seemingly independent bandwidth. However, if I load up both LAN and GUEST, for example, the 38Mbps limit is honoured and the VoIP queue on my LAN interface gets the bandwidth it requires.
I'm really not entirely sure how this config is working!
![Screen Shot 2014-11-03 at 23.54.48.png](/public/imported_attachments/1/Screen Shot 2014-11-03 at 23.54.48.png)
KOM
I would chime in if I had something to offer, but I have no experience with multi-WAN or multi-LAN and pfSense Traffic Shaping.
KillerB: qLink is the name of the default queue on the LAN side.
The queues on the WAN interface are listed like that because they are sharing that whole 6Mbits. The reason qLink and qInternet are under the LAN interface is that the LAN is local: if you do not set a speed on the interface, it goes by what is detected by the NIC. You don't want to limit the speed of the LAN for PC-to-PC transfers, or have that traffic be shaped.
Delete the qDefault under qInternet and put the qLink back under each interface. So under each interface you would have this:
qLink is the default queue, so it gets 20% of the root interface queue. By design you will have all kinds of traffic running that doesn't hit any rules, so 20% of the LAN interface speed is what it gets.
qInternet is where you specify what bandwidth you have to the Internet. You are saying that for qInternet and all of its child queues, you are allocating 38Mbits out of the total LAN interface bandwidth.
Now, if you have three LANs going to the Internet, then I would divide it up so that each has a certain amount of that 38Mbits. Giving them each 38Mbits means you are telling pfSense you have 114Mbits to the Internet, which you do not. If I were doing it, I would do:
qInternet on GUEST - 7Mbits
qInternet on LEGACY - 5Mbits - until you move VoIP to it then I would adjust it a bit.
qInternet on LAN - 26Mbits
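As a config sketch, that split would look something like the following; the interface names em1/em2/em3 are placeholders, and the exact ALTQ syntax may differ from what the pfSense GUI writes:

```
# Divide the real 38Mb of Internet bandwidth across the three LANs so the
# per-LAN qInternet allocations sum to the actual WAN capacity.
queue qInternet on em1 bandwidth 26Mb hfsc   # LAN
queue qInternet on em2 bandwidth 7Mb  hfsc   # GUEST
queue qInternet on em3 bandwidth 5Mb  hfsc   # LEGACY (bump up once VoIP moves here)
```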
I would recommend turning off the CoDel part on the qInternet queue.
Try that out and see how it works for you. I hope this helps.
KOM
But if you have multi-LAN and limit each LAN to its proportional share of the WAN, then are you not essentially setting an upper limit for each LAN? If you have a 40 Mb link and 4 LAN queues, and give each LAN queue 10 Mb, then with one busy LAN and 3 quiet ones you are limiting the busy LAN to 10 Mb. This all depends on how the WAN/LAN speed settings affect everything. If it's just a value used in calculations and the queue will absorb whatever bandwidth is available, then fine. If it also acts as a hard cap then that's a problem.
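As I understand HFSC, this is the key distinction: the bandwidth/linkshare figure is a sharing weight used when the link is contended, not a hard cap, so a queue can borrow idle bandwidth unless an upperlimit is also set. A sketch (option syntax approximate; check pf.conf(5) for the exact form):

```
# Guaranteed 10Mb under contention, but may borrow idle bandwidth:
queue qGuest bandwidth 10Mb hfsc ( linkshare 10Mb )
# Hard-capped at 10Mb even when the link is otherwise idle:
queue qGuestCapped bandwidth 10Mb hfsc ( linkshare 10Mb upperlimit 10Mb )
```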