Upgrade 2.5.2 to 2.6.0, upgrade success, Limiters not passing
-
Hi Steve,
The pfSense Plus installations that use limiters and have NO problems (in fact, none of my pfSense Plus installations have problems) use a simple configuration: official Netgate bare-metal systems (typically the 7100 rack model). The pfSense CE 2.6.0 installations that have problems share almost identical characteristics: a VM in Proxmox, virtio NICs, a WAN, an MPLS link, and a dozen segments with appropriate ACLs. A few ports NATted from the WAN to the DMZ (with limiters), and all internal segments to the WAN with limiters.
The limiters are configured with Tail Drop, the default scheduler, mask none, and a queue on source address / destination address. Very standard. This configuration works perfectly with version 2.5.2. In my case there are no problems immediately after a reboot; only after some time is traffic dropped (both from the WAN to the DMZ, via NAT, and from the internal segments towards the WAN). The problem occurs on multiple firewalls.
Since these are production systems, I rolled back to 2.5.2.
Luca
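For context, a limiter like the one described maps onto FreeBSD's dummynet. A rough sketch in stock ipfw(8) syntax, purely illustrative (pfSense 2.6 actually attaches limiters to pf rules via its dnpipe patches; the bandwidth and pipe/queue numbers here are made up):

```shell
# Illustrative only: what a tail-drop limiter with per-address child queues
# looks like in stock ipfw(8)/dummynet terms. pfSense generates the real
# configuration itself; 100Mbit/s and the pipe/queue numbers are hypothetical.

# One pipe per direction, default (tail drop) queue management:
ipfw pipe 1 config bw 100Mbit/s queue 50   # download
ipfw pipe 2 config bw 100Mbit/s queue 50   # upload

# Child queues hashed per destination / source address, like the GUI's
# "Destination addresses" / "Source addresses" mask setting:
ipfw queue 1 config pipe 1 mask dst-ip 0xffffffff
ipfw queue 2 config pipe 2 mask src-ip 0xffffffff
```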
-
I am also using the DNS Forwarder, pointed at OpenDNS. Maybe 50% of DNS queries seem to be failing through my updated pfSense 2.6.0 box. No limiters, very simple rules. If it's not the DNS Forwarder, could something be wrong with the default block-bogons rule somehow?
I've been using this same configuration for multiple releases. I have another box nearby running 2.5.2 with the same configuration (I just hadn't upgraded that one yet), and all computers connected to it are having no problems.
The DNS issue seems to be tied to the 2.6.0 release, and I don't believe it's related only to limiters.
Thanks. I've been using pfSense for about 5 years now with no problems like this. Easily a 50% failure rate connecting to anything on 2.6.0.
-
You should open a new thread (or use a different open one) for issues with dnsmasq if you're seeing that.
Please keep this thread for issues with Limiters, which I have still been unable to replicate in 22.01 or 2.6.
Steve
-
Steve,
If you want, I can send you a full configuration that is now working in production on 2.5.2 but presented problems after upgrading to version 2.6, if you think it might be useful.
Luca
-
@luca-de-andreis Yes please, hit me in a PM with a link if you can.
-
I am initially receiving ping replies with the limiter enabled, but not for the full 50 pings.
I increased the queue length to 1000 on the limiter. When first applied, I was able to ping 15 times with replies. Then the queue started filling up. This time the queue did in fact fill to 1000 packets before the Limiter Info output indicated anything was being dropped. However the ping replies stopped as soon as packets started entering the queue. That's the end of the road. The packets are never passing out of the queue. Even when I disconnect my test host from the network completely so there is absolutely no more input on the interface, the queue remains full indefinitely until the rules are reloaded.
I tried each of the various Queue Management Algorithms available on the Limiter properties although I must admit I am not familiar with these options. Random Early Detection seems to allow traffic to continue flowing. I see packets being dropped rather than queued on the Limiter Info output. I was able to browse and run several speed tests without losing connectivity. The speed test results were noticeably less than the rates configured in the limiter, but they completed successfully. All other Algorithm choices produced the same result of blocking traffic as soon as packets begin entering the queue.
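For reference, the queue management algorithms in the GUI correspond to dummynet's queue-management options. A hedged sketch of what switching a pipe from tail drop to Random Early Detection looks like at the ipfw(8) level (the bandwidth and the RED w_q/min_th/max_th/max_p values are placeholders, not recommendations):

```shell
# Illustrative only: the same pipe with a 1000-slot queue, first with the
# default tail-drop behaviour, then with RED enabled.
ipfw pipe 1 config bw 100Mbit/s queue 1000

# RED takes w_q/min_th/max_th/max_p; these numbers are placeholders:
ipfw pipe 1 config bw 100Mbit/s queue 1000 red 0.002/200/800/0.1
```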
-
Ok Steve, I've just sent you the link and password to download the xml file.
I've removed the public IPs and altered passwords and key digests...
Luca
-
I found out that using the 2.7.0.a.20220305.0600 devel build on a VMware test bench, the limiter issue is gone and everything seems to be working normally.
Previous devel version had the same issue as 2.6.
I also tried the captive portal patch on 2.6 without success regarding the limiter failure. Perhaps this all ties back to the ipfw function?
-
Is it possible that everything works correctly on pfSense Plus 22.01? I have several firewalls running pfSense Plus 22.01 with active limiters and queues, configured exactly like the 2.6.0 installations that have problems. But... what are the differences between pfSense Plus 22.01 and pfSense CE 2.6.0?
Luca
-
@blikie said in Upgrade 2.5.2 to 2.6.0, upgrade success, Limiters not passing:
I found out that using 2.7.0.a.20220305.0600 devel on vmware test bench, the limiter issue is gone and seems to be working normal.
Previous devel version had the same issue as 2.6.

You mean literally between the 4th and 5th of March snaps?
That is when the root fix for the captive portal issue went in. Which would be very interesting!
Steve
-
@stephenw10
Yes, I went from a clean install of 2.6 to 2.7 devel a week ago and it had the same issue. Fast forward to that day: I updated to that build and was surprised to find the same config, unchanged, working with limiters on. I thought it might be something to do with the captive portal patch, so I ran another fresh 2.6 with the patch applied, but the issue persisted, only on 2.6. All of these run in VMware Workstation 16.2.2.
-
Ah, that's great to know. Yeah, it's not the same patch.
Ok, that should make it much easier to replicate now we have some idea what's triggering it.
Steve
-
Finally, it can be said that the problem has been identified.
stephenw10 of the Netgate development team has identified the cause.
The problem occurs when the Captive Portal is active and limiters are in use. Due to a captive portal bug, even if the portal is configured on a different interface, the limiters do not work correctly on any interface (including the WAN, and therefore also the ports NATted towards the internal segments that are subject to limiters).
If you don't use the Captive Portal, the limiters work without problems.
Luca
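If you want to check whether your box is in the affected combination, a couple of read-only commands may help (assuming, as described above, that the captive portal runs through ipfw loaded as a kernel module, while the limiters hang off pf rules):

```shell
# Run from an SSH shell or Diagnostics > Command Prompt. Read-only checks.

# Is ipfw loaded? On pfSense this normally indicates an active captive portal:
kldstat | grep -i ipfw

# Do any pf rules reference dummynet pipes (i.e. limiters are actually in use)?
pfctl -sr | grep -i dnpipe
```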
-
Yes, exactly that. We are still looking to find a workaround if possible but it's not easy because of where the issue lies.
If anyone is still seeing problems with Limiters when they don't have the captive portal enabled on any interface we want to know about it.
Steve
-
I'm afraid it's not looking too good for a workaround that can be applied as a patch. The interaction between pf and ipfw is at a low level. We are still investigating though.
Steve
-
Steve, do you think a 2.6.1 and a 22.01-p1 will be released pending 22.05 (which I see is far away)?
Luca
-
@stephenw10 said in Upgrade 2.5.2 to 2.6.0, upgrade success, Limiters not passing:
The interaction between pf and ipfw is at a low level
And if that is the issue, does that mean that anyone who was using FreeBSD 12.3 and wanted to use limiters (dummynet?) would have the same issue?
Is this issue native to FreeBSD 12.3, or created when Netgate takes the FreeBSD 12.3 source and cooks its own derived version ?
Heck, I don't even know whether the FreeBSD used is a patched one (with upstream corrections and/or Netgate's salt & pepper) and then built, or just the source "as is".

Anyway, no answer needed, though appreciated. I know issues will get dealt with.
Thanks for the update.
-
@luca-de-andreis Impossible to say yet. We are still looking at it along with a number of other things.
@Gertjan We added code to allow the use of pf and ipfw at the same time. Normally in FreeBSD you would never do that.
Steve
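To illustrate Steve's point: on stock FreeBSD you normally pick one packet filter in /etc/rc.conf, not both, which is why the pf-plus-ipfw combination pfSense uses for limiters and the captive portal is unusual. A sketch of the stock choice (plain FreeBSD, not pfSense configuration):

```shell
# /etc/rc.conf on stock FreeBSD: you would typically enable ONE of these.

# Either pf...
pf_enable="YES"
pf_rules="/etc/pf.conf"

# ...or ipfw, which is where dummynet limiters natively live:
firewall_enable="YES"
firewall_type="open"
dummynet_enable="YES"   # loads the dummynet module for ipfw pipe/queue support
```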
-
Ok, let me think about that.
With pfSense I have been doing just that for many years. Maybe pfSense is less FreeBSD than I thought it was.
Scary situation, though, as the captive portal is a 'must have' for my usage.
Limiters, on the other hand, should be part of the standard toolkit of what a router firewall has to offer.
-
I want to add a +1: as a new pfSense user, I've encountered this issue. I started chasing faulty hardware initially with no luck, but when working back through my changes I found that disabling either the captive portal or the floating rules assigning limiters solved the issues I was seeing.
In my case, connected clients would start a speed test and the throughput would go from 500 Mbps to 0, and then all connectivity stopped. This affected all clients, not just the captive portal ones, so I'm reasonably confident I'm affected by this regression too.