Our Sites become unavailable randomly
-
em1 is our WAN and em0 is our LAN
-
So SSH into pfSense twice.
In one of the shells, cd to /tmp and run:
tcpdump -i em1 -w wancap.pcap
In the second shell, cd to /tmp and run:
tcpdump -i em0 -w lancap.pcap
Then, after a few minutes of testing access to your site, Ctrl-C both of those, download the files to your favorite sniffer (Wireshark, for example), and take a look.
Or I'm happy to take a look at them as well.
-
If you do it via SSH (which is handy because you can run both simultaneously) I'd make that:
tcpdump -i em0 -s 0 -w lancap.pcap
adding the '-s 0' so it grabs the entire frame and not just the first 96 bytes. It probably won't matter either way in this case, but you're not capturing so long that trimming frames becomes necessary, and it could prove helpful to have it all.
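Pulling both suggestions together, here's a sketch of the full capture procedure (interface names em0/em1 come from the posts above; the ring-buffer variant at the end is an optional assumption for cases where the outage takes a long time to reproduce):

```shell
# Shell 1: WAN-side capture, full frames (-s 0), no name resolution (-n)
cd /tmp
tcpdump -i em1 -s 0 -n -w wancap.pcap

# Shell 2: LAN-side capture at the same time
cd /tmp
tcpdump -i em0 -s 0 -n -w lancap.pcap

# Optional: if the problem takes hours to show up, a ring buffer keeps
# /tmp from filling -- e.g. rotate through five 50 MB files:
# tcpdump -i em1 -s 0 -n -C 50 -W 5 -w wancap.pcap
```

Ctrl-C both when the problem has occurred, then open the .pcap files in Wireshark.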
-
What sort of disk is being used (HDD, SSD, USB Flash, Flash Card, etc.) and age? Maybe something got corrupted.
I use a USB flash drive (on the second one now). When the first one started going bad, weird things would happen with pfSense until it was rebooted. A bunch of disk errors would be "fixed" and things would be fine for a few weeks. Wash, rinse, repeat.
Have you tried a reinstall?
Make a config backup to restore after the reinstallation, unless the config is simple enough to redo by hand.
If you have a spare disk, you could swap it into the machine for the reinstall so as to retain the current install in case things go badly.
-
Valid point, cmb; I always hate it when details are missing from a capture. But if you're just looking to see whether traffic is being forwarded, the snap length doesn't really matter. Still, I agree it's always better to grab it all.
-
The install is running off six 18 GB 10,000 RPM SCSI drives in RAID 5.
I did reinstall with another server and different hard drives; same issue.
After the last drop yesterday I disabled pfBlockerNG, and we have had no issues since then. This morning I actually uninstalled pfBlockerNG.
Will update if things change.
-
I keep seeing this in the log:
Jun 4 19:09:59 WAN Block private networks from WAN block 192.168/16 (1000001584) 192.168.1.112:5351 224.0.0.1:5350 UDP
192.168.1.112 is the IP of our main pfSense box. Any idea how to get rid of it?
-
Well, that is multicast traffic; I believe that's either Bonjour or NAT-PMP status announcements on port 5350.
Do you have UPnP enabled on pfSense?
Six 18 GB drives in a RAID 5?? WTF?? For a firewall?
How old are those drives? Why would you not just run two in a mirror? It's not like you need the space.
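If you want to see exactly what traffic is hitting that rule, a sketch of a one-liner (ports 5350/5351 taken from the log entry above) would be:

```shell
# Show only the suspected NAT-PMP/announcement traffic arriving on the WAN;
# -n skips DNS lookups so source/destination IPs print as-is
tcpdump -i em1 -n 'udp port 5350 or udp port 5351'
```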
-
Yes, we have UPnP enabled; not sure why it would be blocking its own traffic?
We have a bunch of these older servers with six 18 GB HDs in them. Since they are old and we have a lot of them, I figure why not, since it's the most reliable way to run them.
I had thought progress was made since it seemed to stay up all weekend long. However, yesterday while I was away from the office the problem returned. There seems to be nothing in the logs at all during the time it happened.
-
Yes, we have UPnP enabled; not sure why it would be blocking its own traffic?
Because you have your WAN and LAN interconnected somewhere, which is bad. "Block private networks" on WAN is blocking it because it's a private source IP arriving on WAN. Fix your network so WAN and LAN aren't on the same broadcast domain.
That could be contributing to the problem you're seeing, or potentially the cause of it, depending on what other network brokenness you have. But given that no useful data was gathered yet again at the last instance of the problem, there's no telling.
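One way to check for that from the firewall itself, assuming em1 is still the WAN interface as stated earlier in the thread, is to watch the WAN for traffic that shouldn't be there:

```shell
# Private (RFC 1918) source addresses arriving on the WAN interface are a
# sign the WAN and LAN share a broadcast domain:
tcpdump -i em1 -n 'src net 192.168.0.0/16 or src net 10.0.0.0/8 or src net 172.16.0.0/12'

# Seeing your LAN hosts' broadcast/multicast chatter on em1 is another
# giveaway (-e prints the Ethernet headers, so MAC addresses are visible):
tcpdump -i em1 -n -e 'broadcast or multicast'
```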
-
I'm not sure how I could have my LAN and WAN interconnected; I have separate network ports for each.
Where could I check, and what would I be looking for if it's a configuration issue?
-
Interconnected at the switch level, unrelated to the firewall. Maybe your drop from your provider is plugged into the same switch as your LAN hosts, with no VLAN or other isolation.
-
@cmb:
… there's no way a hardware failure would discriminate between diff types of traffic. ...
Not entirely true
My Intel NIC supports scheduling interrupts differently based on TCP ports.
-
@cmb:
… there's no way a hardware failure would discriminate between diff types of traffic. ...
Not entirely true
My Intel NIC supports scheduling interrupts differently based on TCP ports.
Sure, if you're configuring something along those lines. But we don't touch anything like that in NICs at this point. For anything we configure today, a hardware failure would not discriminate between diff types of traffic in the way OP describes.
-
I just checked the cabling coming from our ISP switch; there are definitely only two cables coming from it, one going into each of our pfSense boxes.
-
Correct, I just wanted to point out that there are some strange corner cases that really don't apply almost ever, but could exist.
-
I am using 1:1 mapping for three of my external IPs. I also have a rule allowing port 80 on one of those IPs.
This is the port that is going down.
I deleted my rule, then added a port forward and let it create the rule.
When I did this, the website opened up again. I hope this has some lasting effect.
-
I am using 1:1 mapping for three of my external IPs. I also have a rule allowing port 80 on one of those IPs.
This is the port that is going down.
I deleted my rule, then added a port forward and let it create the rule.
When I did this, the website opened up again. I hope this has some lasting effect.
That's nothing more than a coincidence.
This happened again, and yet still not a single packet capture while it's happening? There's a reason I and others in this thread asked for that multiple times, weeks ago. You need to capture the traffic while the problem is occurring so you can see what's actually happening, and share it with some of us so we can troubleshoot it for you. Anything other than that isn't troubleshooting the problem; it's poking at stuff hoping some unknown issue will go away by pushing random buttons that likely have no relation to the issue at all, which is absolutely not going to be successful.
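As a sketch of what that analysis looks like once the captures exist (the IP below is a placeholder for the affected external address; with 1:1 NAT the LAN-side capture would show the internal server IP instead):

```shell
# WAN side: did the client's port 80 traffic arrive at the firewall?
tcpdump -r wancap.pcap -n 'tcp port 80 and host 203.0.113.10'

# LAN side: did the firewall forward it to the server? If SYNs appear
# on em1 but never on em0, the firewall is dropping them; if they reach
# em0 with no reply, look at the web server instead.
tcpdump -r lancap.pcap -n 'tcp port 80 and host 203.0.113.10'
```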
-
So my sites have been up all weekend since making that change.
Will update again after some time.
Also, to all the people complaining that I am not providing information: I'm not going to do a bunch of extra work using methods I am unfamiliar with unless it is absolutely necessary. So far it's been much easier to just reboot the firewall and get the site back online.
The fact that problems like this are not present in the log kind of says something about the lack of logging. I shouldn't have to Wireshark to figure out what's up with our firewall; that just seems ridiculous.
-
But would you learn from doing so??