Routed IPsec using if_ipsec VTI interfaces
-
Makes perfect sense to me - as soon as the daily build hits pfSense Factory devel, I'll start testing again!
-
OK, I think I have that nailed down. Apparently it does not get along with pf route-to directly on the interface. It works fine for LAN traffic but not traffic exiting from the firewall itself. I pushed a fix, should be in snaps soonish.
-
Cool, thanks!
Question - and this might be sacrilege - can I set my Factory boxes to download CE snapshots? I poked around the repos, but it looks like just swapping the pfSense.conf one didn't really work out on a test box :)
-
@obrienmd said in Routed IPsec using if_ipsec VTI interfaces:
Cool, thanks!
Question - and this might be sacrilege - can I set my Factory boxes to download CE snapshots? I poked around the repos, but it looks like just swapping the pfSense.conf one didn't really work out on a test box :)
Not easily; several things need to be adjusted and it's just not worth the hassle to downgrade like that in-place. All the changes I made today, including the fix for that route-to issue, have been synchronized to Factory, so they should show up in snapshots for both CE and Factory by the morning.
-
Much appreciated.
This is going to help in a lot of places... Now I just have to get Verizon to terminate mobile private network tunnels as VTI :) Wish me luck...
-
Hrm, with the new devel builds, phase 1s are coming up (and SAs show, but with no inbound traffic), and I'm seeing this in the IPsec logs; I think the key lines are the SADB_ACQUIRE and 'unable to acquire reqid' ones.
CARP is enabled on the WANs of one side, and I'm using the CARP IP for 'Interface' on the P1 locally and 'remote gateway' on the P1 remotely. Might this have something to do with it?
The other pair I've been testing with the most is the one where one side is Factory; we had some success with it earlier, but neither side is HA/CARP. I'll be testing again with that one today as soon as Factory images come out.
Jun 8 05:18:00 charon 10[CFG] vici client 360 registered for: list-sa
Jun 8 05:18:00 charon 10[CFG] vici client 360 requests: list-sas
Jun 8 05:18:00 charon 10[CFG] vici client 360 disconnected
Jun 8 05:18:00 charon 10[KNL] received an SADB_ACQUIRE with policy id 2 but no matching policy found
Jun 8 05:18:00 charon 10[KNL] creating acquire job for policy {local_wan_ip}/32|/0 === {remote_wan_ip}/32|/0 with reqid {0}
Jun 8 05:18:00 charon 15[CFG] trap not found, unable to acquire reqid 0
Jun 8 05:18:03 charon 15[KNL] <con2|2> querying policy {local_tunnel_ip}/32|/0 === {remote_tunnel_ip}/32|/0 in failed, not found
Jun 8 05:18:03 charon 15[KNL] <con2|2> querying policy {remote_tunnel_ip}/32|/0 === {local_tunnel_ip}/32|/0 in failed, not found
Jun 8 05:18:03 charon 15[IKE] <con2|2> sending DPD request
Jun 8 05:18:03 charon 15[IKE] <con2|2> queueing IKE_DPD task
Jun 8 05:18:03 charon 15[IKE] <con2|2> activating new tasks
Jun 8 05:18:03 charon 15[IKE] <con2|2> activating IKE_DPD task
Jun 8 05:18:03 charon 15[ENC] <con2|2> generating INFORMATIONAL request 253 [ ]
Jun 8 05:18:03 charon 15[NET] <con2|2> sending packet: from {local_ip}[500] to {remote_ip}[500] (76 bytes)
Jun 8 05:18:03 charon 10[NET] <con2|2> received packet: from {remote_ip}[500] to {local_ip}[500] (76 bytes)
Jun 8 05:18:03 charon 10[ENC] <con2|2> parsed INFORMATIONAL response 253 [ ]
Jun 8 05:18:03 charon 10[IKE] <con2|2> activating new tasks
Jun 8 05:18:03 charon 10[IKE] <con2|2> nothing to initiate
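For what it's worth, the kernel SA and policy state can also be inspected directly from a shell while it's stuck like this; a minimal sketch, assuming FreeBSD's base setkey(8) utility:

    # Dump the kernel security association database (SAD)
    setkey -D

    # Dump the kernel security policy database (SPD)
    setkey -DP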
-
Granted I haven't tried it on an HA pair but nothing has changed in a couple days that would affect that. When was the last snapshot you had working there?
Can you show the conXXXX entry from /var/etc/ipsec/ipsec.conf for that tunnel?
-
Non-HA side:
conn con2
    fragmentation = yes
    keyexchange = ikev2
    reauth = yes
    forceencaps = no
    mobike = no
    rekey = yes
    installpolicy = no
    dpdaction = restart
    dpddelay = 10s
    dpdtimeout = 60s
    auto = start
    left = {non_ha_side_wan_ip}
    right = {ha_side_wan_ip}
    leftid = {non_ha_side_wan_ip}
    ikelifetime = 28800s
    lifetime = 3600s
    ike = aes256-sha1-modp1024!
    esp = aes256gcm128-sha256-modp2048,aes256gcm96-sha256-modp2048,aes256gcm64-sha256-modp2048!
    leftauth = psk
    rightauth = psk
    rightid = {ha_side_wan_ip}
    rightsubnet = 10.90.91.1
    leftsubnet = 10.90.91.2/30
HA side:
conn con1
    fragmentation = yes
    keyexchange = ikev2
    reauth = yes
    forceencaps = no
    mobike = no
    rekey = yes
    installpolicy = no
    dpdaction = restart
    dpddelay = 10s
    dpdtimeout = 60s
    auto = start
    left = {ha_side_wan_ip}
    right = {non_ha_side_wan_ip}
    leftid = {ha_side_wan_ip}
    ikelifetime = 28800s
    lifetime = 3600s
    ike = aes256-sha1-modp1024!
    esp = aes256gcm128-sha256-modp2048,aes256gcm96-sha256-modp2048,aes256gcm64-sha256-modp2048!
    leftauth = psk
    rightauth = psk
    rightid = {non_ha_side_wan_ip}
    rightsubnet = 10.90.91.2
    leftsubnet = 10.90.91.1/30
-
Those don't look quite right; there is no reqid in those blocks like there should be. Are the ipsecX interfaces actually present? Might be due to the tunnel being IKEv2 - I think all three of my test systems here have been IKEv1. I'll try to spin up a v2 set.
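For comparison, a routed/VTI conn block would normally include a reqid line tying the policies to the interface; a rough sketch of just that part (the value 2000 is purely illustrative, not taken from these configs):

    conn con1
        reqid = 2000
        installpolicy = no
        # ...rest of the tunnel options as above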
-
Yup, the ipsec1000/ipsec2000 interfaces (depending on the box) are there, and they show the proper /30s.
-
I did see one problem come up that I just pushed a fix for, but I didn't see that specific error you had unless I had an IKEv1/IKEv2 mismatch between the peers.
The fix I made only touches two lines, so you can easily apply it manually to test: https://github.com/pfsense/pfsense/commit/d4b43c48ed1636d3fcd6e47d73ba721bd63d883a
With that I just switched both sides from IKEv1 to IKEv2 and it came right back up.
-
Yep, nailed it. Looking good with that change.
Because of your warning about FRR, I'm testing with static routing right now. After everything was fixed and I disabled/re-enabled the interfaces to get traffic flowing, the static routes were showing in the route table but were set to hn1 rather than the ipsec interface. Editing and re-saving the static route resolved the issue.
With dynamic routing I bet I won't see that in the future, but if there's resiliency code somewhere that resets static routes' interfaces when gateways disappear/appear, go up/down, go pending, etc., perhaps something needs to be tweaked there.
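In the meantime, the route can be checked and pointed back at the VTI by hand from a shell; a rough sketch, assuming FreeBSD's netstat(1)/route(8) and an illustrative remote network (substitute the real remote subnet and ipsecX interface):

    # See which interface the static route currently uses
    netstat -rn | grep 192.0.2

    # If it still points at hn1, re-point it at the VTI interface
    route delete -net 192.0.2.0/24
    route add -net 192.0.2.0/24 -iface ipsec1000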
-
I still need to do a bit of work on static routes. I had them solved and working on reboot, but somewhere in my changes this week that appears to have broken again, as I am not seeing my routes in the table after boot. I need to investigate more and open another issue for that.
FRR should be better next week, see my updates on https://redmine.pfsense.org/issues/8449#note-2
-
Great, thanks Jim.
-
Is there a simple way to map a devel release, e.g. 2.4.4.a.20180608.1025 for Factory or 2.4.4.a.20180608.0718 for CE, against a git commit? I don't want to assume it was built using all commits immediately prior to that time (and I don't know which time zone these are based on).
-
@obrienmd said in Routed IPsec using if_ipsec VTI interfaces:
Is there a simple way to map a devel release, e.g. 2.4.4.a.20180608.1025 for Factory or 2.4.4.a.20180608.0718 for CE, against a git commit? I don't want to assume it was built using all commits immediately prior to that time (and I don't know which time zone these are based on).
Not without loading it up and seeing what's in /etc/version.lastcommit.
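A minimal sketch of doing that, assuming /etc/version.lastcommit holds the hash of the last git commit included in the build:

    # On the firewall, show the commit the snapshot was built from
    cat /etc/version.lastcommit

    # In a local clone of https://github.com/pfsense/pfsense, inspect that commit
    git show --no-patch <hash-from-above>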
Servers are using CDT.
-
Static routes should be OK now. I'm not quite sure how it worked before, given the changes I had to make, but it's working now.
https://github.com/pfsense/pfsense/commit/0aa52fb21a21f58035f2e2fe3b9328a9c175ffb5
I think that might be most, if not all, of the functional issues. There are still some anti-foot-shooting measures I need to take, like preventing removal of an IPsec tunnel or P2 that is used as a VTI interface.
-
On the latest devel for Factory and CE, everything is functionally looking great. I had to restart *pinger (I forget which one is used these days) for the gateways to get out of pending after the initial interface bring-up, but packets are all flowing, no weird state issues, very solid :)
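In case it helps anyone else, the gateway monitor can also be kicked from a shell; a rough sketch, assuming the monitor is dpinger and that the pfSsh.php 'svc' playback script is available (the exact service name is an assumption):

    # Restart the gateway monitoring daemon so it re-checks the ipsecX gateways
    pfSsh.php playback svc restart dpinger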
-
@obrienmd said in Routed IPsec using if_ipsec VTI interfaces:
On the latest devel for Factory and CE, everything is functionally looking great. I had to restart *pinger (I forget which one is used these days) for the gateways to get out of pending after the initial interface bring-up, but packets are all flowing, no weird state issues, very solid :)
Great! I'll have to check back on the gateways. One of mine is OK and comes right up; I had disabled gateway monitoring on the other pair because it was interfering with the packet captures I was taking while diagnosing some of the other traffic issues above.