TCP:R blocks with open rules
Hello - I am on day 2 of my pfSense life. I got all the base stuff done yesterday and feel good about that. I then moved some more devices behind the pfSense today and now my log is flooded with blocked TCP:R
I tried looking into it via Google, and what I found was old content from 2012 that didn't really explain what was going on or offer a fix.
I installed Docker when I first kicked off the VM, and at the time the VM was on a 172.27.x.x network. It appears that Docker created its own interface and assigned it 172.17.0.2. I have no 172.17.x.x network, then or now; DHCP was set as a /24, so a DHCP-issued address would have been on the 172.27.x.x network, but this one looks statically set (I'm not strong with Linux networking commands yet).
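For what it's worth, Docker's default bridge subnet is chosen by Docker itself, not issued by DHCP, which is why the address looks statically set. A couple of commands to confirm this on the VM (a sketch, assuming the Docker CLI and iproute2 are installed):

```shell
# Show the subnet of Docker's default "bridge" network (usually 172.17.0.0/16)
docker network inspect bridge --format '{{(index .IPAM.Config 0).Subnet}}'

# Show the docker0 bridge interface Docker created on the host
ip addr show docker0
```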
I built pfSense to protect a 192.168.x.x network and moved my VMs to that network this morning. So the server now has a 192.168.x.x address, but the Docker container still has the 172.17.x.x address.
The app is world-facing, so inbound connections can come from anywhere to port 28967.
NAT rule is set for port 28967:28967 on ISP WAN from ANY to server (192.168.x.x).
The app appears to be working and I do see traffic in the app.
When I noticed the blocks, I created an alias for the 172.17.0.2 address and put a rule at the top of the LAN zone that is wide open for this purpose. I have noticed that it doesn't have any hits though.
So my guess is that most of the traffic is hitting a lower rule I have in place until everything is configured. The LAN interface is configured for the 192.168.x.x network, and nothing is configured for the 172.17.x.x network except the rule above.
I changed the logging on the default allow rule and see this:
So looking at the traffic, the server responded with a TCP:S, but a second later the Docker IP sent a TCP:R and was blocked.
How do I configure the zones, IPs, networks, whatever, so that this traffic from the Docker 172 IP is allowed?
I tried some static routes, but nothing I did helped; not sure it was right either :)
Because the VM only has one NIC and Docker created its own network, is it binding itself to the 192.168.x.x address or something? Like I need to NAT the 172.17.x.x? I'm sorry, I don't get why it won't hit on the open rules...
It seems bizarre to me that Docker would essentially barge into the /24 you have defined with its own out of scope config. You're not supposed to put different networks on the same wire unless they're VLANs. This link seems to say that Docker NATs the container traffic via the bridge Docker0, so the network should never see anything from 172.17.x.x.
It appears that Docker is internally NATing the traffic as the VM's IP, but only part of it. The reset (TCP:R) is not being masqueraded in iptables. I am trying to find documentation about setting up Docker masquerading in iptables to see if this helps.
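For reference, the masquerade rule Docker normally installs itself on startup looks like the following (a sketch; the subnet may differ on your host). Checking whether it's present, and re-adding it if it's missing, is one thing to try:

```shell
# List the NAT POSTROUTING rules; Docker's MASQUERADE line should cover 172.17.0.0/16
iptables -t nat -S POSTROUTING

# The rule Docker normally adds; re-add it manually if it's missing
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
```

Note that masquerading is applied per connection via conntrack, so a RST that doesn't match an existing tracked connection can leave the host with the container's source address, which would match what the pfSense logs show.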
@KOM So all the references I could find say that behavior is exactly what Docker does: it creates an unused network, creates a bridge, and masquerades all container traffic out the eth0 port.
That all seems set up correctly, but for some reason the TCP:R traffic is not being masqueraded.
I tried editing the iptables rules on the server, but the blocks still show up in the logs...
If anyone is an iptables master and wants to take a look, let me know.
I tried to post pics but the site keeps blocking me as spam
If everything is working correctly and you're just tired of the log spam, you could create your own custom rule to block that traffic and set it to not log.
@KOM I believe everything is working for the app's traffic, but the state table is filling up and nothing is closing because the resets are blocked.
I created a rule to allow the traffic, but it's not working because 172.17.x.x is not a routed network on the interface, and I don't know how to fix that...
Granted, the system should be bridging and masquerading all traffic out eth0 as 192.168.5.2 rather than 172.17.0.2, so I'm in other forums trying to fix that. But I also don't want issues from the blocked resets filling the state table, and if it can't be fixed on the server, then I want the firewall to allow it.
Has this been resolved? I'm sitting with the same issue.
@Moh not on my end... but I gave up trying
Do you still run Docker on that port? I assume you ran Storj?
@Moh yes. It seems that Docker is not masquerading properly and is sending traffic from the 172 private IP instead of the configured IP for eth0, and the pfSense firewall doesn't know how to handle it.
@Moh I bumped your rep by one. I think you have to have at least 5 or your posts get filtered a lot harder.
@rml_52 I fixed my issue today: I switched my storagenode Docker container from the "bridge" network to the "host" network, then added rules on Linux to let traffic through on the port.
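For anyone landing here later, a sketch of that change, assuming a Storj storagenode container (the image name and firewall tool are examples; any volume/identity flags your node needs are omitted here):

```shell
# Run the container on the host's network stack instead of the Docker bridge,
# so traffic leaves with the host's 192.168.x.x address and no NAT is involved
docker run -d --name storagenode --network host storjlabs/storagenode:latest

# With host networking there is no Docker port mapping, so open the app's
# port in the host firewall instead (ufw shown as an example)
ufw allow 28967
```

With `--network host` there is no 172.17.x.x source address to leak, which is why the pfSense reset blocks stop.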