Changing TCP rule from PASS to BLOCK, rule error via notification
-
2.1-RC0 (amd64)
built on Mon Jun 17 05:26:57 EDT 2013

I duplicated a WAN IPv4 TCP pass rule to an IPv6 TCP block any/any rule and saved it. I got a notification that there was an error.
php: : New alert found: There were error(s) loading the rules: /tmp/rules.debug:194: keep state on block rules doesn't make sense - The line in question reads [194]: block in quick on $WAN inet6 proto tcp from any to any flags S/SA keep state label "USER_RULE: silently drop ipv6 packets"
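For comparison, pf only accepts state options on pass rules, so the line loads cleanly once the state keywords are dropped. A hedged sketch of what line 194 would need to look like (my reconstruction, not the actual committed fix):

```
block in quick on $WAN inet6 proto tcp from any to any label "USER_RULE: silently drop ipv6 packets"
```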
I did not have keep state specified in the GUI for the original (or new) rule. (UPDATE: it did have keep state specified in the raw rule; it just doesn't show it in the GUI.) I had left the advanced options un-expanded. It seems to have carried the 'keep state' option in the config over from the pass rule it was cloned from when saving. (I am not talking about an actual state table entry; I am talking about the rule definition.)
When someone changes a rule from pass to block (or duplicates a pass rule and changes it to block), shouldn't it clear the state option before saving it?
Interestingly, when this happened, all external traffic stopped working until I deleted the rule.
-
It seems to have kept the keep state from the pass rule when saving it.
When someone changes a rule from pass to block (or duplicates a pass rule and changes it to block), shouldn't it clear the state option before saving it?
See Diagnostics -> States and click on the Reset States tab. It is often necessary to reset states after "major" changes to firewall rules. I expect it is better to reset states on demand than to do it automatically after every firewall rule change.
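If you prefer the console, the same thing the Reset States tab does can be achieved by flushing pf's state table with pfctl (run as root on the firewall itself):

```
# Flush all entries from pf's state table
# (equivalent to Diagnostics -> States -> Reset States)
pfctl -F states
```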
-
I tried to clarify the original post. I am talking about the rule option being kept from a cloned rule when it is changed to block, not about any actual state from a connection.
I am trying to find ways to replicate the config error now step by step.
-
I found out how to replicate it.
It doesn't matter if it is ipv4 or ipv6.
1. Create rule on WAN. Pass, ipv4, tcp, src 1.1.1.1, dst 2.2.2.2, manually set state to 'keep state' on the rule.
1a. View the rule and you will not see any specific state setting, even though it was saved with keep state. No real problem so far, but you cannot tell whether a rule explicitly specifies keep state or simply doesn't specify one and uses the default, which is also keep state.
2. Save rules
3. Clone the rule you created in step 1 and change Pass to Block for this new cloned rule.
4. Save the new rule.
5. Apply changes.
6. Click on the Firewall -> Rules menu item to refresh the rules listing. If you keep doing this for 10 or 20 seconds you will eventually get a notification about a rule error saying that keep state is not valid for block rules. It seems you can fix it by changing the state setting to none.
It seems like the logical thing to do for a block rule is to automatically remove any state setting at the moment the rule is saved.
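The sanitization suggested above could be sketched like this. This is Python pseudologic, not pfSense's actual PHP code, and the field names ('type', 'statetype') are assumptions modeled loosely on the config; it only illustrates the idea of stripping the inherited state option from non-pass rules at save time.

```python
# Hypothetical sketch (NOT pfSense's actual code): when a rule is saved
# as block/reject, drop any inherited state option so the generated pf
# rule never ends up with "keep state" on a block rule.
def sanitize_rule(rule):
    """Return a copy of the rule dict with the state option stripped
    from non-pass rules. 'type' and 'statetype' are assumed field names."""
    cleaned = dict(rule)
    if cleaned.get("type") != "pass":
        cleaned.pop("statetype", None)
    return cleaned

# A pass rule cloned and flipped to block would otherwise keep 'keep state':
cloned = {"type": "block", "proto": "tcp", "statetype": "keep state"}
print(sanitize_rule(cloned))  # the 'statetype' key is removed
```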
-
I had similar problems updating to the 16/06/2013 image: I got a notification about keep state not being valid for a block rule.
I did not change or edit any rules; I just did a simple update from the web interface.
The problem was with block rules that had the logging option enabled (I think). It made the box unable to do its job: there was an error trying to import the rules in /tmp/rule.debug (or a similar file, I don't remember the exact name), and I guess the rules after the problematic ones were not loaded. I had no time for debugging, so I just reverted to an old image (13/06/2013).
I can reproduce it if needed, as it runs in a VM on ESXi.
I run the i386 images.
-
A fix was just committed; either wait for the next new snapshot, or gitsync to RELENG_2_1.
-
Thanks.
For the record, I think this was also the cause of a pfsync filter sync error to the secondary, caused by the bad rule that got synced. It went away after I fixed the rule.
[ An error code was received while attempting Filter sync with username admin https://10.x.x.2:443 - Code 2: Invalid return payload: enable debugging to examine incoming payload]
-
Yes, that was probably from a failed filter reload on the slave. Once the fix is on both units they should be OK.