WAN stops forwarding
Before opening this topic, I searched the forums and kept updating to new BETA snapshots from time to time, waiting for the problem to disappear, but without success.
The setup is:
Internet - WAN - Pfsense - LAN
WAN: em (PCIe add-in card)
LAN/OPT: bge onboard cards.
On the LAN side, there are 4 servers in a datacenter running several things: databases, web, svn, etc. The need for pfSense arose because there are many services distributed among the servers, with complex relationships between them, both for direct access between servers and for access from the internet (including VPN access to the servers). Having a firewall in front of them would solve this easily. After studying several possibilities, I chose pfSense because it is built on FreeBSD.
I started by migrating a rarely used server (just for gathering some statistics) behind pfSense. It's only accessed a few dozen times per week. I gave the server a LAN address, configured an IP alias on pfSense, and created the needed NAT rules. It works as expected. Nice. I kept this setup for a couple of weeks before trying to migrate another of the servers, a web server.
I migrated the web server and configured it the same way as the first one. It's not a very busy web server (around 30k visits/day), but it also has around 2000-3000 users connected to a webchat service at any given time.
With this server behind pfSense, it suddenly stops forwarding traffic after a variable amount of time: anywhere from several hours up to 1-2 days, but never more than 2 days. This happens again and again with every new snapshot of pfSense. It's very easy for me to roll back the configuration, so I can test this whenever I want.
I don't know if I'm doing something wrong the same way every time I reinstall pfSense, or if there is some kind of "bug" or whatever. When the system stops working, the packets appear as blocked in the firewall logs. Reloading the filter doesn't help; only a reboot of the system fixes the situation.
Has anyone reported something like this before? I can provide any needed data and run any kind of tests if needed.
Yes, I know this is not a stable release :) The hardware is a Dell SC1435.
wallabybob:
When pfSense stops forwarding is it possible to connect to the web GUI? What numbers does the dashboard display for states? Does the system log report anything at around the time the "blocked" entries appear in the firewall log?
Perhaps your system is running out of a resource. Some other posts have suggested to me that there might be a memory leak associated with the bge driver. Please post the output of the pfSense shell command # netstat -m
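If the leak theory is right, the "requests denied" counters in that output should climb above zero by the time forwarding stops. A minimal sketch of how one might scan for that (assuming plain sh and awk; the sample lines below are a stand-in for the live `netstat -m | awk ...` output, not real data from this box):

```shell
#!/bin/sh
# Sketch: flag nonzero "denied" counters in `netstat -m` output, which
# would indicate mbuf/cluster exhaustion. On a live pfSense box you would
# pipe the real command into the awk program instead of this sample.
sample='2050/1535/3585 mbufs in use (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)'

result=$(printf '%s\n' "$sample" | awk '
    # Lines like "0/0/0 requests for mbufs denied (...)": split the
    # slash-separated counters and flag any that are greater than zero.
    /requests for .* denied/ {
        split($1, n, "/")
        for (i in n) if (n[i] + 0 > 0) bad = 1
    }
    END { print (bad ? "denied requests seen: possible mbuf exhaustion" : "no denied requests") }
')
echo "$result"
```

Comparing this check on the healthy output against the output captured right after the failure would show whether allocation failures line up with the forwarding stoppage.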
The GUI still works (via the OPT interface). I'll reconfigure the system again so I can gather the details you asked about, and I'll post them as soon as it fails again. Right now, while it's working properly (with the web server not routed through it), the output is:
2050/1535/3585 mbufs in use (current/cache/total)
2049/1495/3544/25600 mbuf clusters in use (current/cache/total/max)
2048/768 mbuf+clusters out of packet secondary zone in use (current/cache)
0/90/90/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
4610K/3733K/8344K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/9/6656 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines
But this is after having rebooted the system and taking the web server out of the pfSense path. As I said, I'll set things up again so I can answer your questions.
wallabybob:
That startup data could be helpful for comparison with data when forwarding stops.
OK, I've already reverted the web server to go through pfSense, so now it's just a matter of time :)
It happened again, and I've been able to narrow down the problem. I'll close this thread and open a new one.