Upgrade 2.5.2 to 2.6.0, upgrade success, Limiters not passing traffic
-
I mean HTTPS, port 443. Check https://1.1.1.1/
-
I don't know if my problem is related to this post. I have more than 20 pfSense installs and in general the upgrade from 2.5.2 to 2.6.0 went well. However, on two firewalls that use limiters, both updated yesterday, the NATted ports (TCP and UDP) regulated by the limiters were no longer reachable; a downgrade to 2.5.2 completely solved the problem.
-
@luca-de-andreis
Inbound NAT? Destination NAT? Maybe source NAT is the issue with internet access for the other issues here...
-
@tohil
DNAT, from the WAN to internal segments.
I use limiters heavily and the NAT translation doesn't work (tested on two different firewalls).
A rollback to 2.5.2 and the same setup works perfectly.
-
@luca-de-andreis said in upgrade 2.5.2 to 2.6.0, upgrade success, no internet connection:
the NAT translation doesn't work
Are you seeing traffic simply not being NATed? So no states with NAT opened? Traffic just blocked hitting the WAN, still with the external address?
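For example, something like this from Diagnostics > Command Prompt or an SSH shell would show whether the forward rules are loaded and whether states are being created; 203.0.113.10 and port 443 below are only placeholders for whatever WAN address and forwarded port you are actually testing:
# placeholders: substitute your real WAN address and forwarded port
pfctl -sn | grep rdr            # confirm the port-forward (rdr) rules are loaded
pfctl -ss | grep 203.0.113.10   # look for states created when a client connects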
Steve
-
Hi Steve,
Unfortunately the firewalls in question are in full production (we would like to migrate them all to the Plus version with advanced support soon). One was updated yesterday morning (2.5.2 -> 2.6.0), the other yesterday evening (same version change), and everything worked fine for a few hours. Only today did we realize that the firewalls were up but none of the UDP or TCP ports in DNAT (from the WAN to several internal segments) was working. We did not investigate the matter; as quickly as possible we rolled back the version (all the firewalls run in QEMU-KVM and it is our habit to keep snapshots of the VMs for several days after an upgrade). So we do not know the cause of the problem in detail, we only have some indications: use of limiters, correct operation during the first hours, and if the firewall is restarted the NAT (with limiters) starts working again. After rolling back to 2.5.2 everything works perfectly again.
Thanks
Luca
-
@thiasaef said in upgrade 2.5.2 to 2.6.0, upgrade success, no internet connection:
I bet that the issue is related to the DNS Resolver being broken once again.
I stand corrected, I have the exact same issue in 2.5.2, sorry!
By the way: when I set the DNS server handed out via DHCP to something other than the firewall itself, it works fine on all LAN interfaces.
-
@thiasaef said in upgrade 2.5.2 to 2.6.0, upgrade success, no internet connection:
I stand corrected, I have the exact same issue in 2.5.2, sorry!
Which does not make it any better, to be honest. How is it possible that a major issue like this, known for at least two months, made it into a release like 2.6.0?
-
This thread is getting very confusing; there are at least three different issues being discussed.
It's mostly about limiters not passing traffic though, so let's keep it to that. Please open a new thread for Unbound problems.
Steve
-
Hello,
Looks like I am late to the party, but we are also experiencing this issue with our limiters since the upgrade to 2.6.0. I have two different limiters configured on two different inside interfaces. No NAT is in use on either interface. Each interface has a subnet of publicly routable IPs. The limiters were configured to rate-limit specific host IPs within each subnet and worked as expected under 2.5.2. Now they block traffic. I can't even ping from the limited host to the interface IP. Removing the limiter from the In/Out pipes immediately restores full connectivity.
I have watched the Limiter Info output and, if I am reading it correctly, it appears to hard-limit at 50 packets and then drop everything after that point. Every time I re-add the limiter I can ping until this number hits 50, and then everything stops:
00008: 27.000 Mbit/s 0 ms burst 0
q131080 50 sl. 0 flows (1 buckets) sched 65544 weight 0 lmax 0 pri 0 droptail
sched 65544 type FIFO flags 0x1 256 buckets 1 active
mask: 0x00 0xfffffff8/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
168 ip     66.###.###.40/0         0.0.0.0/0         4975779 1355919070 50 10861 2291
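If I'm not mistaken, the Limiter Info page is just printing dummynet state, so the same counters can also be watched from a shell while testing:
# same data as Diagnostics > Limiter Info, refreshed on demand
dnctl pipe show
dnctl queue show
-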
Hmm, interesting. Seems likely that's because the queue length is 50 packets by default.
If you can, try setting the queue length to something longer and see if that changes the number of passed packets.
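The queue length is under the limiter's advanced options (Firewall > Traffic Shaper > Limiters, the Queue length field, if I remember right). For a quick non-persistent test it should also be possible to bump it from a shell with dnctl, reusing the pipe number and bandwidth shown in your own dnctl pipe show output; the values below are only an example and will be lost on the next filter reload:
# example only -- pipe number, bandwidth and queue size are illustrative
dnctl pipe 8 config bw 27Mbit/s queue 1000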
To be clear, do you see replies to the first 50 pings?
Hard to see how it could be filling the queue but passing traffic at the same time.
Steve
-
Steve, I have an update...
I use a good number of pfSense firewalls (more than 20) and I can confirm that the problem in question occurs on 2.6.0.
But I also have a good number of pfSense Plus installs updated to the latest version, 22.01, and they seem to work perfectly. I'm talking about at least five pfSense Plus installs on Netgate appliances (bare metal). These use limiters and work perfectly. Does that help in understanding the cause?
Luca
-
Possibly. I've so far been unable to replicate it on anything. There must be some combination of things that causes it, because some people seem to be hitting it with a very basic limiter setup.
Are you able to get me a status output or config file from any of the installs that are seeing this?
Or just some way to set up something that repeatably hits it?
Steve
-
Hi Steve,
The pfSense Plus systems that use limiters and have NO problems (in fact, all of my pfSense Plus installations are problem-free) use a simple configuration on official Netgate bare-metal hardware (typically the 7100 rack model).
The pfSense CE installs on 2.6.0 that have problems share almost identical characteristics: VMs in Proxmox, virtio NICs, one WAN, an MPLS link, a dozen segments with appropriate ACLs, a few ports NATted from the WAN to the DMZ (with limiters), and all traffic from the internal segments to the WAN passing through limiters.
The limiters are configured with tail drop, the default scheduler, no mask on the limiter itself, and child queues masked by source address / destination address. Very standard, and a configuration that works perfectly on 2.5.2.
In my case there are no problems immediately after a reboot; only after some time does traffic get dropped (both from the WAN to the DMZ, in NAT, and from the internal segments towards the WAN). The problem occurs on multiple firewalls.
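For reference, my understanding is that a limiter set up this way corresponds roughly to a dummynet configuration like the sketch below; the pipe/queue numbers and bandwidth are only illustrative, since pfSense generates the real rules itself:
# illustrative only -- pipe/queue numbers and bandwidth are made up
dnctl pipe 1 config bw 27Mbit/s                       # the limiter: no mask, tail drop (default)
dnctl queue 1 config pipe 1 mask src-ip 0xffffffff    # child queue, one dynamic queue per source address
dnctl queue 2 config pipe 1 mask dst-ip 0xffffffff    # child queue, one dynamic queue per destination address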
Since these are production systems, I rolled back to 2.5.2.
Luca
-
I am also using the DNS Forwarder, pointed at OpenDNS. Maybe 50% of DNS queries seem to be failing through my updated pfSense 2.6.0 box. No limiters, very simple rules. If it's not the DNS Forwarder, could something be wrong with the default block-bogons rule somehow?
I've been using this same configuration for multiple releases. I have another box nearby still running 2.5.2 with the same configuration (I just hadn't upgraded that one yet), and all the computers connected to it are having no problems.
The DNS issue seems to be tied to 2.6.0, and I don't believe it's related only to limiters.
Thanks. I've been using pfSense for about 5 years now and have had no problems like this. Easily a 50% failure rate connecting to anything on 2.6.0.
-
You should open a new thread (or use a different open one) for issues with dnsmasq if you're seeing that.
Please keep this thread for issues with Limiters, which I have still been unable to replicate in 22.01 or 2.6.
Steve
-
Steve,
If you want, I can send you a full configuration that is now working in production on 2.5.2 but showed the problem after the upgrade to 2.6, if you think it might be useful.
Luca
-
@luca-de-andreis Yes please, hit me in a PM with a link if you can.
-
I am initially receiving ping replies with the limiter enabled, but not for 50 pings.
I increased the queue length to 1000 on the limiter. When it was first applied, I was able to ping 15 times with replies; then the queue started filling up. This time the queue did in fact fill to 1000 packets before the Limiter Info output indicated anything was being dropped. However, the ping replies stopped as soon as packets started entering the queue. That's the end of the road: the packets never pass out of the queue. Even when I disconnect my test host from the network completely, so there is absolutely no more input on the interface, the queue remains full indefinitely until the rules are reloaded.
I tried each of the queue management algorithms available in the limiter properties, although I must admit I am not familiar with these options. Random Early Detection seems to allow traffic to keep flowing: I see packets being dropped rather than queued in the Limiter Info output, and I was able to browse and run several speed tests without losing connectivity. The speed test results were noticeably lower than the rates configured in the limiter, but they completed successfully. All the other algorithm choices produced the same result of blocking traffic as soon as packets began entering the queue.
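In case it is useful to anyone following along at the shell, my understanding is that the tail drop and RED cases map roughly onto dummynet configurations like these; the pipe number, bandwidth, queue length and RED parameters are only illustrative, not taken from my real setup:
# illustrative only -- values do not come from my actual configuration
dnctl pipe 1 config bw 27Mbit/s queue 1000                       # tail drop, 1000-slot queue
dnctl pipe 1 config bw 27Mbit/s queue 1000 red 0.002/100/500/0.1 # RED: w_q/min_th/max_th/max_p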
-
OK Steve, I've just sent the link and password to download the XML file.
I've removed the public IPs and altered the passwords and key digests...
Luca