ICMP and Policy Routing on a Multi-WAN



  • I have two independent WAN connections with multiple LANs that need NAT to each of the WAN connections. Example: production servers only to the WAN1 and client traffic only out WAN2.

                          --------------
    Prod server LAN    -->|  pfSense   |--> WAN1
    Client Traffic LAN -->|   (NAT)    |--> WAN2
                          --------------

    I have read through https://doc.pfsense.org/index.php/Multi-WAN#Policy_Route_Negation and set up the outbound NAT rules pointing to each of the corresponding interfaces. I have also created policy-routing rules on the LAN interfaces to forward traffic to the respective gateways. When a host is on the prod server LAN, its traffic (TCP/UDP) goes out WAN1, and when the host is on the client LAN, its traffic goes out WAN2, as expected. However, when I use ping (ICMP), the traffic only exits the gateway that is marked as the default gateway in the routing tab.
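    In pf terms, the setup above corresponds roughly to route-to rules like the following sketch (pfSense generates the real rules from the GUI; the macro and interface names here are assumptions):

        # Sketch only; $PROD_LAN, $WAN1_IF, etc. are assumed names.
        pass in on $PROD_LAN route-to ($WAN1_IF $WAN1_GW) from $PROD_NET to any keep state
        pass in on $CLIENT_LAN route-to ($WAN2_IF $WAN2_GW) from $CLIENT_NET to any keep state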

    Why does ICMP follow a different path and not abide by policy routing? I have explicitly made a rule for ICMP, and it does not fix it. I've also cleared states. Is this a bug or expected behavior with ICMP? Please elaborate; I was hoping to condense the number of firewalls.


  • Netgate

    It doesn't. ICMP works the same way. What are you pinging?

    You might want to check that your policy routing rules are not TCP/UDP or something. Ping is ICMP (which is neither TCP nor UDP) and will not be policy-routed by a TCP/UDP rule.
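    The distinction looks something like this in pf syntax (a sketch; macro names are assumptions):

        # Matches only TCP and UDP; pings fall through to the default gateway:
        pass in on $CLIENT_LAN route-to ($WAN2_IF $WAN2_GW) proto { tcp udp } from $CLIENT_NET to any
        # ICMP needs its own rule (or an any-protocol rule) to be policy-routed:
        pass in on $CLIENT_LAN route-to ($WAN2_IF $WAN2_GW) inet proto icmp from $CLIENT_NET to any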



  • I've checked these firewall rules over and over, and I have also made an explicit ICMP rule. ICMP should match an any-protocol rule just as TCP/UDP does. My only guess is that the traffic is getting handed to the NAT process, but ICMP is treated differently since it does not contain source and destination ports? Also, between tests I've reset states, which I should not need to do, but tried anyway.

    FW rules
    States        Protocol  Source      Port  Destination  Port  Gateway     Queue  Schedule  Description
    0/535.44 MiB  IPv4 *    10.37.1.51  *     *            *     Exp_WAN_GW  none             Pass Traffic to Exp
    0/7.17 GiB    IPv4 *    10.37.1.52  *     *            *     DQ_WAN_GW   none             Pass Traffic to DQ

    Note: to save time I made each rule for an individual IP (x.x.x.x/32), which is why the source addresses look similar.

    NAT
    Interface  Source         Source Port  Destination  Destination Port  NAT Address       NAT Port  Static Port  Description
    EXP_WAN    10.37.1.51/32  *            *            *                 EXPT_WAN address  *                      Test NAT
    DQ_WAN     10.37.1.52/32  *            *            *                 DQE_WAN address   *                      Test NAT



  • @Derelict:

    It doesn't. ICMP works the same way. What are you pinging?

    You might want to check that your policy routing rules are not TCP/UDP or something. Ping is ICMP (which is neither TCP nor UDP) and will not be policy-routed by a TCP/UDP rule.

    How come you can set mark values on Linux ping with '-m'?
    There need to be more tools for working with packet mark values.
    Sincerely,
    JC Magras
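    For what it's worth, on Linux the iputils ping option -m sets SO_MARK on the socket, and marked packets can then be steered with ip rule. A rough sketch (addresses, interface, and table number are made up; requires root):

        ip route add default via 192.0.2.1 dev eth1 table 100   # alternate routing table
        ip rule add fwmark 42 table 100                         # marked packets use table 100
        ping -m 42 -c 3 example.org                             # -m tags the pings with mark 42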


  • Netgate

    Where are those rules in relation to the other rules for the entire subnet?

    It all matters. You might want to just post screenshots.

    You don't need to be that specific with the outbound NAT. Outbound NAT has no bearing on which routes or interfaces are used; it only determines what translation happens when matching traffic flows that way.
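    In a pf sketch (macro names assumed), the route-to on the LAN-side rule is what picks the exit path; the outbound NAT rule on the WAN only rewrites the source of traffic already leaving that interface:

        # Path selection happens on the LAN rule:
        pass in on $CLIENT_LAN route-to ($WAN2_IF $WAN2_GW) from $CLIENT_NET to any
        # Outbound NAT only translates what is already flowing out WAN2:
        nat on $WAN2_IF from $CLIENT_NET to any -> ($WAN2_IF)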



  • Thank you all for the reality check! I am turning up a new firewall, and no rules existed except for test rules. Since everyone confirmed that ICMP is treated the same as TCP/UDP traffic for PF markings (route policies), and that the placement of firewall rules matters, I looked at my test rules…

    It turns out I had an ICMP echoreq rule on all interfaces with a destination of any, put there for diagnostic purposes. Changing the destination to "This Firewall" kept it useful for diagnostics, and now the policy routes are working as expected!
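    In pf terms the fix amounts to narrowing the diagnostic rule's destination (a sketch; the interface macro is an assumption):

        # Before: matched every outbound ping ahead of the route-to rules
        pass in on $LAN_IFS inet proto icmp icmp-type echoreq from any to any
        # After: only pings addressed to the firewall itself match here
        pass in on $LAN_IFS inet proto icmp icmp-type echoreq from any to self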

    Thank you