Routed IPsec using if_ipsec VTI interfaces
-
Sweet - I'll test as soon as the snapshots show up and let you know how it goes!
I did notice that the official Netgate boxes I'm working with (SG-2440s mostly) seem to "see" snapshots for 2.4.4 later than the community edition installs I'm also testing with.
-
The factory snapshots happen on a different schedule than the CE snapshots so they won't ever be exactly the same. Close, but not the same. Also depends on how things get merged back into factory and if there are any conflicts.
-
Looks like it's better and worse. I can pass traffic between the hosts that failed before, but the gateways are not being generated properly, so static routes won't work and so on. I need to fix that up.
I see the code blocks I didn't update to the new style so I'll fix those up in the morning.
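In the meantime, if anyone testing a snapshot wants to see what did or didn't get created, this is the sort of thing to look at from the shell (ipsec1000 is just an example name; use whatever if_ipsec interface the tunnel got):

# Is the if_ipsec interface present with its tunnel addresses?
ifconfig ipsec1000
# Did a gateway/route for the far side make it into the routing table?
netstat -rn -f inet | grep ipsec1000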
-
Thanks Jim!
-
OK, I just pushed the updated gateway code and it's working well for me now.
I do, however, see the same behavior you did where the firewall can't reach a routed network on the far side using the ipsec interface address as the source. It does work if I set the source to the LAN address, though.
Using the ipsecX interface address as the source:
ping -S 10.6.106.1 10.7.0.1
PING 10.7.0.1 (10.7.0.1) from 10.6.106.1: 56 data bytes
^C
--- 10.7.0.1 ping statistics ---
2 packets transmitted, 0 packets received, 100.0% packet loss
Going LAN to LAN from the firewall:
ping -S 10.6.0.1 10.7.0.1
PING 10.7.0.1 (10.7.0.1) from 10.6.0.1: 56 data bytes
64 bytes from 10.7.0.1: icmp_seq=0 ttl=64 time=0.802 ms
64 bytes from 10.7.0.1: icmp_seq=1 ttl=64 time=0.883 ms
64 bytes from 10.7.0.1: icmp_seq=2 ttl=64 time=0.716 ms
^C
--- 10.7.0.1 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
Routes all look correct. On the source node the traffic appears on the ipsecX and enc0 interfaces, but the counters on the child SA do not increase and no ESP leaves, so somehow it isn't making its way to that connection. I'll keep poking at it, but it's not the end of the world since the same situation also didn't work on plain IPsec, though we had hoped routed IPsec would be a cure for that.
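For reference, these are the sort of checks behind that statement (ipsec1000 and igb0 are just placeholders for the VTI and WAN interfaces here; ipsec statusall shows the same counters if swanctl isn't on the box):

# Does the echo request show up on the routed IPsec interface and on enc0?
tcpdump -ni ipsec1000 icmp
tcpdump -ni enc0 icmp
# Do the child SA byte/packet counters move while the ping runs?
swanctl --list-sas
# If the SA never sees the traffic, no ESP should appear on the WAN either.
tcpdump -ni igb0 esp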
-
Since that firewall-to-LAN routing issue is not a flaw in the VTI code that I can see, I've split that off into https://redmine.pfsense.org/issues/8551
-
Makes perfect sense to me - as soon as the daily build hits pfSense factory -devel, I'll start testing again!
-
OK, I think I have that nailed down. Apparently it does not get along with pf route-to directly on the interface. It works fine for LAN traffic but not traffic exiting from the firewall itself. I pushed a fix, should be in snaps soonish.
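For anyone curious, the mechanism in question looks roughly like this in pf terms. This is a generic sketch, not the actual rule set pfSense generates; the macros and addresses are made up for illustration:

# route-to steers matching *forwarded* traffic out a given interface/gateway.
# Packets generated by the firewall itself never match an inbound pass rule,
# so they miss the route-to -- which lines up with the LAN-works,
# firewall-itself-fails behavior described above.
pass in on $lan_if route-to (ipsec1000 10.6.106.2) from $lan_net to 10.7.0.0/16
-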
Cool, thanks!
Question - and this might be sacrilege - can I set my Factory boxes to download CE snapshots? I poked around the repos, but it looks like just swapping the pfSense.conf one didn't really work out on a test box :)
-
@obrienmd said in Routed IPsec using if_ipsec VTI interfaces:
Cool, thanks!
Question - and this might be sacrilege - can I set my Factory boxes to download CE snapshots? I poked around the repos, but it looks like just swapping the pfSense.conf one didn't really work out on a test box :)
Not easily, several things need to be adjusted and it's just not worth the hassle to downgrade like that in-place. All the changes I made today, including the fix for that route-to issue, have been synchronized to Factory, so they should show up in snapshots for both CE and Factory by the morning.
-
Much appreciated.
This is going to help in a lot of places... Now I just have to get Verizon to terminate mobile private network tunnels as VTI :) Wish me luck...
-
Hrm, with the new devel builds, phase 1s are coming up (and SAs show, with no inbound traffic), and I'm seeing this in the ipsec logs; I think the key lines are the SADB_ACQUIRE one and 'unable to acquire reqid'.
CARP is enabled on WANs of one of the sides, and I'm using the CARP IP for 'Interface' on the P1 locally and 'remote gateway' on the P1 remotely. Might this have something to do with it?
The other pair I've been testing with the most is the one where one side is Factory (that's where we had some success earlier), but neither side of it is HA/CARP. I'll test with that pair again today as soon as the Factory images come out.
Jun 8 05:18:00 charon 10[CFG] vici client 360 registered for: list-sa
Jun 8 05:18:00 charon 10[CFG] vici client 360 requests: list-sas
Jun 8 05:18:00 charon 10[CFG] vici client 360 disconnected
Jun 8 05:18:00 charon 10[KNL] received an SADB_ACQUIRE with policy id 2 but no matching policy found
Jun 8 05:18:00 charon 10[KNL] creating acquire job for policy {local_wan_ip}/32|/0 === {remote_wan_ip}/32|/0 with reqid {0}
Jun 8 05:18:00 charon 15[CFG] trap not found, unable to acquire reqid 0
Jun 8 05:18:03 charon 15[KNL] <con2|2> querying policy {local_tunnel_ip}/32|/0 === {remote_tunnel_ip}/32|/0 in failed, not found
Jun 8 05:18:03 charon 15[KNL] <con2|2> querying policy {remote_tunnel_ip}/32|/0 === {local_tunnel_ip}/32|/0 in failed, not found
Jun 8 05:18:03 charon 15[IKE] <con2|2> sending DPD request
Jun 8 05:18:03 charon 15[IKE] <con2|2> queueing IKE_DPD task
Jun 8 05:18:03 charon 15[IKE] <con2|2> activating new tasks
Jun 8 05:18:03 charon 15[IKE] <con2|2> activating IKE_DPD task
Jun 8 05:18:03 charon 15[ENC] <con2|2> generating INFORMATIONAL request 253 [ ]
Jun 8 05:18:03 charon 15[NET] <con2|2> sending packet: from {local_ip}[500] to {remote_ip}[500] (76 bytes)
Jun 8 05:18:03 charon 10[NET] <con2|2> received packet: from {remote_ip}[500] to {local_ip}[500] (76 bytes)
Jun 8 05:18:03 charon 10[ENC] <con2|2> parsed INFORMATIONAL response 253 [ ]
Jun 8 05:18:03 charon 10[IKE] <con2|2> activating new tasks
Jun 8 05:18:03 charon 10[IKE] <con2|2> nothing to initiate
-
Granted I haven't tried it on an HA pair but nothing has changed in a couple days that would affect that. When was the last snapshot you had working there?
Can you show the conXXXX entry from /var/etc/ipsec/ipsec.conf for that tunnel?
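Something like this will dump just that block from the generated file (assuming the conn ended up named con2):

# Print the con2 block: start at "conn con2", stop before the next conn entry.
awk '/^conn /{show = ($2 == "con2")} show' /var/etc/ipsec/ipsec.conf
-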
Non-HA side:
conn con2
    fragmentation = yes
    keyexchange = ikev2
    reauth = yes
    forceencaps = no
    mobike = no
    rekey = yes
    installpolicy = no
    dpdaction = restart
    dpddelay = 10s
    dpdtimeout = 60s
    auto = start
    left = {non_ha_side_wan_ip}
    right = {ha_side_wan_ip}
    leftid = {non_ha_side_wan_ip}
    ikelifetime = 28800s
    lifetime = 3600s
    ike = aes256-sha1-modp1024!
    esp = aes256gcm128-sha256-modp2048,aes256gcm96-sha256-modp2048,aes256gcm64-sha256-modp2048!
    leftauth = psk
    rightauth = psk
    rightid = {ha_side_wan_ip}
    rightsubnet = 10.90.91.1
    leftsubnet = 10.90.91.2/30
HA side:
conn con1
    fragmentation = yes
    keyexchange = ikev2
    reauth = yes
    forceencaps = no
    mobike = no
    rekey = yes
    installpolicy = no
    dpdaction = restart
    dpddelay = 10s
    dpdtimeout = 60s
    auto = start
    left = {ha_side_wan_ip}
    right = {non_ha_side_wan_ip}
    leftid = {ha_side_wan_ip}
    ikelifetime = 28800s
    lifetime = 3600s
    ike = aes256-sha1-modp1024!
    esp = aes256gcm128-sha256-modp2048,aes256gcm96-sha256-modp2048,aes256gcm64-sha256-modp2048!
    leftauth = psk
    rightauth = psk
    rightid = {non_ha_side_wan_ip}
    rightsubnet = 10.90.91.2
    leftsubnet = 10.90.91.1/30
-
Those don't look quite right, there is no reqid in those blocks like there should be. Are the ipsecX interfaces actually present?
Might be due to the tunnel being IKEv2; I think all three of my test systems here have been IKEv1. I'll try to spin up a v2 set.
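For comparison, a conn block generated for a working VTI tunnel should carry a line like this (the value below is just an example; it's assigned per tunnel):

conn con2
    # ...options as above, plus the reqid that ties the child SA
    # to its ipsecX interface (example value)...
    reqid = 2

That reqid is how the kernel matches the negotiated child SA to the right if_ipsec interface, which would explain SAs coming up but never carrying traffic when it's missing.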
-
Yup, ipsec1000/2000 (depending on box) ints are there, and show proper /30s.
-
I did see one problem come up that I just pushed a fix for, but I didn't see that specific error you had unless I had an IKEv1/IKEv2 mismatch between the peers.
The fix I made only touches two lines, you can easily apply it manually to test: https://github.com/pfsense/pfsense/commit/d4b43c48ed1636d3fcd6e47d73ba721bd63d883a
With that I just switched both sides from IKEv1 to IKEv2 and it came right back up.
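If you'd rather script it than edit the two lines by hand, something along these lines should do it from a shell. It's a sketch, untested as written; the -p2 assumes the usual a/src/... paths in the repo, so eyeball the patch before applying. The System Patches package pointed at that commit ID should accomplish the same thing.

# Grab the commit as a patch and apply it against the live filesystem.
fetch -q -o /tmp/vti-fix.patch https://github.com/pfsense/pfsense/commit/d4b43c48ed1636d3fcd6e47d73ba721bd63d883a.patch
# Paths in the repo look like a/src/etc/inc/..., so strip two components.
patch -d / -p2 < /tmp/vti-fix.patch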
-
Yep, nailed it. Looking good with that change.
Because of your warning on frr, I'm testing with static routing right now. After everything was fixed and I disabled / re-enabled the interfaces to get traffic flowing, static routes were showing in the route table but set to hn1 rather than the ipsec interface. Editing and re-saving the static route resolved the issue.
With dynamic routing I bet I won't see that in the future, but if there's resiliency code somewhere that resets the interfaces on static routes when gateways disappear/appear, go up/down, go pending, etc., perhaps something needs to get tweaked there.
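For reference, this is roughly how I caught it; 10.7.0.1 here is just a stand-in for a host on the far-side network:

# Which interface is the static route actually bound to? (grep for the far-side net)
netstat -rn -f inet | grep 10.7
# Or ask the kernel directly which route/interface a far-side host would use:
route -n get 10.7.0.1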
-
I need to work a bit on static routes yet. I had it solved and working on reboot, but somewhere in my changes this week that appears to have broken again: I'm not seeing my routes in the table after it boots up. I need to investigate more and open another issue for that.
FRR should be better next week, see my updates on https://redmine.pfsense.org/issues/8449#note-2
-
Great, thanks Jim.