Asymmetric routing with VTI
-
@heper : Yes, 10.9.1.0/29 and 10.9.1.8/29 are non-overlapping.
Like I say, they are working fine. From one side I can ping the other side, and indeed I can route all my site-to-site traffic down both.
The issue is that incoming pings from (say) 10.0.0.1 to the local address 10.9.1.10 (VTI interface) have responses sent via a different interface, because the route to 10.0.0.1 is via the other interface.
What I want is some sort of policy routing such that incoming pings to just the address 10.9.1.10 (VTI) have their responses sent back out via the same interface.
-
VTI doesn't support reply-to - that's why the routing table is consulted every time for the return traffic. Policy routing isn't available either, for a similar reason: pf only deals with the ephemeral "IPsec" interface, and the VTI interfaces themselves are not visible to pf.
I think the only option is to align the routing - configure a route on pfSense to 10.0.0.1 which points to the VTI gateway.
-
If you set up a routing protocol such as BGP, then it will make sure that packets and replies take the same link when multiple choices are available. Since that's already in your plans, do it now and your problem will go away.
-
@jimp I have the BGP setup now. It doesn't enforce symmetric routing - for the fairly obvious reason that it can only affect the FIB and does not apply policy routing.
So although both BGP peering sessions are up, and localpref is configured to prefer the P2P link, I cannot ping the VTI far-end interface.
brian@ix-mon2:~$ ping 10.9.1.2
PING 10.9.1.2 (10.9.1.2) 56(84) bytes of data.
64 bytes from 10.9.1.2: icmp_seq=1 ttl=63 time=1.88 ms
64 bytes from 10.9.1.2: icmp_seq=2 ttl=63 time=1.63 ms
64 bytes from 10.9.1.2: icmp_seq=3 ttl=63 time=1.47 ms
^C
--- 10.9.1.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.473/1.661/1.881/0.174 ms

brian@ix-mon2:~$ ping 10.9.1.10
PING 10.9.1.10 (10.9.1.10) 56(84) bytes of data.
^C
--- 10.9.1.10 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2024ms
If I can't do reply-to on the VTI interface, then I guess there's not much I can do about that. I can try to find if either pfSense or ASA supports a BGP MIB so I can monitor the links that way.
-
Why are you policy routing with a routing protocol in place? Let the routing protocol decide which link to use, then both sides will agree.
-
It seems I'm being very unclear, I apologise.
- Site to site traffic flows normally. That is fine.
- BGP works fine to choose whether to go over link A (point-to-point) or link B (VTI). That is fine.
- I want to be able to monitor the remote interface IPs by pinging them. This is to test that the P2P link is working, and to test that the VTI virtual P2P is working. I want a Nagios alert if either goes down.
This means that from the monitoring box, which is behind the ASA on the right-hand side of the diagram, I want to ping 10.9.1.2 and 10.9.1.10 respectively.
So the pings go:
mon box -> ASA -> LH side -> ASA -> mon box
When both links are up, BGP chooses the P2P (link A) as the preferred site-to-site route; the VTI is there for backup only.
Now, I want to monitor the VTI link. The outgoing ping has source IP (say) 10.0.0.1 and destination IP 10.9.1.10. It goes out of the ASA's VTI interface, into the pfSense VTI interface, and hits it. That all works fine.
Then the pfSense box sends an echo response, with source IP 10.9.1.10 and destination 10.0.0.1. However, because the best route back to the monitoring box is via the P2P link, the return packet takes the P2P link instead of the VTI link.
Therefore the echo reply arrives at the ASA, but arrives on a different interface than the one the echo request went out of (out VTI, in P2P). So the ASA drops it.
The only requirement for policy routing is, on the pfSense box: "if I receive a packet addressed to myself on the VTI interface, I should send the response via the VTI interface". I imagine this is the normal requirement when receiving inbound connections in a multi-WAN scenario. But here neither interface is a WAN.
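On a regular interface this is exactly what pf's reply-to keyword does. As a sketch, here's the rule I'd want if pf could see the VTI interface (which, per @vladimirlind above, it can't; the interface name "vti0" is illustrative):

```
# pf.conf sketch - NOT currently possible for VTI, since pf can't see it.
# "vti0" is an illustrative name; 10.9.1.12 is the ASA's VTI-side address.
vti_if = "vti0"
pass in on $vti_if reply-to ($vti_if 10.9.1.12) inet proto icmp \
    from any to 10.9.1.10 keep state
```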
Like I say, it's only for monitoring. Normal traffic can just follow the best path without problems. But the whole point of redundancy is that if the best path fails, there is a second-best path to fall back to; so I need a way to test the second-best path to be sure it's ready and working in case it's needed.
I hope this is a bit clearer!
Cheers... Brian.
-
i'm unsure how it works with bgp, i'm only a bit familiar with ospf myself.
but i'm sure it should act similarly. with ospf you assign a cost to each connection; the lowest-cost path wins.
so you set p2p with a cost of 10 & set vti with a cost of 20
you don't set any static routes (just a gateway for each interface/tunnel). at any given time, pfsense would only have a single route in its routing table towards the 10.0.0.1 subnet. when that is the case, it's impossible for an echo to go out the wrong path, because the path is not there.
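as a sketch, that cost assignment in frr-style syntax (the interface names here are made up, not from this thread):

```
! hypothetical FRR OSPF config; interface names are illustrative
interface em1
 ip ospf cost 10      ! point-to-point link - preferred
interface ipsec1000
 ip ospf cost 20      ! VTI tunnel - backup
```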
the only oddness that could happen is that site-A sets the opposite link as the lowest cost compared to site-B ....
-
Sorry, but you're really not getting it :-)
You can always ping an interface, even if it's not chosen by BGP or OSPF as the best path to reach any particular destination.
That's because the interface has its own address, and is a destination in its own right when you ping it.
In this case, there are two point-to-point subnets:
10.9.1.0/29
10.9.1.8/29
Both of these subnets are directly connected (at the pfSense side and the ASA side). Therefore, both subnets are in the forwarding tables at both sides.
Logged into the ASA, I can ping 10.9.1.2 and I can ping 10.9.1.10. This proves both links are working. However, when I ping 10.9.1.2, the ASA chooses 10.9.1.4 as a source address. When I ping 10.9.1.10, the ASA chooses 10.9.1.12 as a source address. And therefore in this instance the pings come back down the same path, and everything works fine.
But the monitoring box is on 10.0.0.1 behind the ASA. It also tries to ping 10.9.1.2 or 10.9.1.10. In both cases, the echo request arrives.
But in the case where it pings 10.9.1.10 (the VTI link), the echo response comes back via the other link. That's because the best route to 10.0.0.1, learned by pfSense via BGP, is via the other link.
Now, that in itself isn't a problem - asymmetric routing is a fact of life in many IP networks - but the ASA firewall doesn't like it. Although the echo response is delivered back to the ASA, it comes in via the "wrong" interface and is dropped.
-
i see. @vladimirlind said reply-to is not working with vti, so perhaps you can find a different way to monitor the gateway?
maybe you can get away with something like this:
https://forum.netgate.com/topic/118589/monitor-interface-status-with-snmp-and-nagios
otherwise you'll probably have to migrate to another type of tunnel that doesn't have the reply-to issue
-
@candlerb said in Asymmetric routing with VTI:
But in the case where it pings 10.9.1.10 (the VTI link), the echo response comes back via the other link. That's because the best route to 10.0.0.1, learned by pfSense via BGP, is via the other link.
Hm, if you install a static route (or propagate it via BGP) to 10.0.0.1 with next-hop set to the VTI gateway 10.9.1.12, you will be able to monitor the VTI, but you will lose that ability for p2p monitoring, if I understand correctly. Because in that case icmp requests will go to the p2p interface 10.9.1.2 and the replies will flow back via the VTI - the ASA will discard those too.
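As a sketch, assuming 10.9.1.12 is the ASA's VTI-side address (per the addressing elsewhere in this thread), the static route on pfSense would look like:

```
# FreeBSD route syntax, as run from the pfSense shell; in the GUI this
# would be a static route.  10.9.1.12 = the ASA's VTI-side address.
route add -host 10.0.0.1 10.9.1.12
```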
Probably you can bypass this asymmetry with outbound NAT on the ASA - translate icmp packets from 10.0.0.1 --> 10.9.1.10 to the ASA's VTI address 10.9.1.12, so pfSense will see icmp requests sourced from 10.9.1.12 and send the replies back down the proper tunnel. And make the same translation for icmp from 10.0.0.1 --> 10.9.1.2, to the p2p address 10.9.1.4 (so that if the route to 10.0.0.1 flips to the VTI, you will still be able to monitor the p2p address 10.9.1.2).
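A sketch of the VTI half of that translation in ASA manual-NAT syntax (the object names and the "inside" nameif are my assumptions; "vti_lch" is the tunnel nameif shown later in the thread):

```
! hypothetical: source-NAT the monitoring box to the ASA's VTI address
! when it pings the pfSense VTI address 10.9.1.10
object network MON-BOX
 host 10.0.0.1
object network ASA-VTI-ADDR
 host 10.9.1.12
object network PFSENSE-VTI-ADDR
 host 10.9.1.10
nat (inside,vti_lch) source static MON-BOX ASA-VTI-ADDR destination static PFSENSE-VTI-ADDR PFSENSE-VTI-ADDR
```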
-
Yes, I could give the monitoring box additional IP addresses, with specific static routes, and ensure the probes are sent from the right source. That would involve writing a new plugin since check_ping doesn't support binding to source IP.
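For what it's worth, a minimal sketch of such a plugin (assuming the Linux iputils ping, whose -I flag binds the source address; all names and thresholds here are illustrative, not something that exists today):

```python
#!/usr/bin/env python3
"""Hypothetical Nagios plugin: ping a target from a specific source IP.

check_ping can't bind to a source address, so this sketch shells out to
the system ping (Linux iputils, whose -I flag selects the source) and
parses the packet-loss percentage from the summary line.
"""
import re
import subprocess
import sys


def parse_loss(ping_output):
    """Return the packet-loss percentage from a ping summary."""
    m = re.search(r"(\d+(?:\.\d+)?)% packet loss", ping_output)
    if m is None:
        raise ValueError("no packet-loss summary found in ping output")
    return float(m.group(1))


def check(target, source, count=3):
    """Return a Nagios exit code: 0 = OK, 2 = CRITICAL."""
    proc = subprocess.run(
        ["ping", "-I", source, "-c", str(count), "-W", "2", target],
        capture_output=True, text=True,
    )
    try:
        loss = parse_loss(proc.stdout)
    except ValueError:
        loss = 100.0
    status = 0 if loss == 0 else 2
    print(f"{'OK' if status == 0 else 'CRITICAL'}: "
          f"{target} from {source}, {loss:.0f}% loss")
    return status


if __name__ == "__main__" and len(sys.argv) == 3:
    sys.exit(check(sys.argv[1], sys.argv[2]))
```

Run as e.g. `check_ping_src.py 10.9.1.10 10.0.0.1` from the monitoring box (with the extra source addresses and static routes in place).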
As for using NAT: in principle that would work too, but I've been bitten by NAT bugs on the ASA so many times I'm not going to attempt that. It's not even clear if NAT in this instance would apply to the traffic before or after being VTI-encapsulated.
I thought of another option to check out: maybe the ASA supports IPSLA with the IPSLA MIB.
Anyway, thanks for all the suggestions. It seems VTI is almost, but not quite, like a real interface :-)
-
BTW: the ASA doesn't implement either the Cisco BGP MIB (1.3.6.1.4.1.9.9.187) or the standard BGP MIB (1.3.6.1.2.1.15)
-
what about "same-security-traffic permit inter-interface" on ASA? Not a perfect workaround if it works at all...
-
@vladimirlind That's a good suggestion! Unfortunately, the ASA tunnel interface has no security-level setting. I would therefore assume it uses the security-level from the outside interface, and I wouldn't really want to set the same security level on the WAN and the private point-to-point interface.
interface Tunnel1
 nameif vti_lch
 ip address 10.9.1.12 255.255.255.248 standby 10.9.1.13
 tunnel source interface outside
 tunnel destination x.x.x.x
 tunnel mode ipsec ipv4
 tunnel protection ipsec profile VTI-PROFILE1
 !security-level is an unknown setting here
UPDATE: I tried setting the P2P link to security-level 0 (matching outside), and it still didn't work. This was with
same-security-traffic permit inter-interface
same-security-traffic permit intra-interface
-
Came here to back up @candlerb. We're used to ECMP routing across two VTI tunnels on ASRs and such, but the ASA (due to its asymmetric path check) doesn't allow this.
This seems to be because the ASA assigns the outbound VTI interface (e.g. VTI1) to the flow state table and mandates that return traffic also arrive on that interface, when in reality BGP will load-balance return flows to VTI2. It definitely presents a confusing issue at first.
Our way around this is to disable multi-pathing by decreasing outbound MED advertisements and increasing LOCAL_PREF for a designated 'primary' VTI interface.
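A sketch of that policy in FRR-style syntax (the ASN, the neighbor address, and the route-map names are all illustrative):

```
! hypothetical: prefer the 'primary' VTI both outbound (higher LOCAL_PREF)
! and inbound (lower MED than advertised over the backup tunnel)
router bgp 65001
 neighbor 192.0.2.1 route-map PRIMARY-IN in
 neighbor 192.0.2.1 route-map PRIMARY-OUT out
!
route-map PRIMARY-IN permit 10
 set local-preference 200
!
route-map PRIMARY-OUT permit 10
 set metric 50
```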