IPsec messages and behavior have me confused
-
I have been using an IPsec IKEv2 connection between my local site and Azure for well over a year without trouble. I added a second Azure Gateway in a different Azure region, set it up, and it is behaving very strangely. I also removed the old gateway and its corresponding IPsec entries in pfSense. The new connection is not working well. Here are some observations.
1: Everything seems to connect normally; no obvious issues getting connected.
2: When pinging through the tunnel to a remote machine, it only sometimes works.
3: Remote Desktop sessions work through sign-on and then hang while painting the screen after login. After a minute or so, RDP reports that it is trying to reconnect, does so, and the screen finishes painting. That process repeats indefinitely.
4: I have two other IKEv2 tunnels to other sites that are working perfectly (pfSense to pfSense).
5: The following block of log entries repeats with high volume in the log. Please note that con4000 is the Azure tunnel. 192.168.122.0/24 has nothing to do with con4000; it is associated with one of the other tunnels. Lastly, "unacceptable" and TS_UNACCEPT seem like red flags. Note that I am not an IPsec/IKEv2 guru, just someone trying to use it.
Jan 29 10:57:12 charon 13[ENC] <con4000|1> parsed CREATE_CHILD_SA request 1198 [ SA No TSi TSr ]
Jan 29 10:57:12 charon 13[NET] <con4000|1> received packet: from 70.37.75.15[4500] to 24.255.32.17[4500] (396 bytes)
Jan 29 10:57:12 charon 13[NET] <con4000|1> sending packet: from 24.255.32.17[4500] to 70.37.75.15[4500] (76 bytes)
Jan 29 10:57:12 charon 13[ENC] <con4000|1> generating CREATE_CHILD_SA response 1197 [ N(TS_UNACCEPT) ]
Jan 29 10:57:12 charon 13[IKE] <con4000|1> failed to establish CHILD_SA, keeping IKE_SA
Jan 29 10:57:12 charon 13[IKE] <con4000|1> traffic selectors 192.168.122.0/24|/0 === 10.54.0.0/16|/0 unacceptable
Jan 29 10:57:12 charon 13[CFG] <con4000|1> looking for a child config for 192.168.122.0/24|/0 === 10.54.0.0/16|/0
6: I am burning through network bandwidth. I disabled the connection to avoid blowing the budget big-time.
So, am I looking at a corrupted configuration? If so, how can I correct the issue? Hopefully the answer will not be to reconfigure the router from scratch.
Any advice you can give would be greatly appreciated.
-
From the small portion of the logs you posted, it looks like the other side is requesting that you bring up traffic selectors 192.168.122.0/24|/0 === 10.54.0.0/16|/0, and your side, based on what you have stated, is correctly denying it.
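In strongSwan terms (which pfSense runs under the hood), a TS_UNACCEPT reply means the responder found no child (Phase 2) config whose traffic selectors cover what the peer proposed. A `swanctl.conf`-style sketch with placeholder subnets, just to illustrate the matching rule — pfSense generates the real config from the GUI Phase 2 entries:

```
connections {
    con4000 {
        local_addrs  = 24.255.32.17
        remote_addrs = 70.37.75.15
        children {
            azure {
                # The peer's proposed selectors must fall inside these,
                # or the CREATE_CHILD_SA is answered with TS_UNACCEPT.
                local_ts  = 192.168.1.0/24   # local LAN (placeholder)
                remote_ts = 10.54.0.0/16     # Azure VNet
            }
        }
    }
}
```

Azure is proposing 192.168.122.0/24 === 10.54.0.0/16 on con4000, no child config there matches it, so strongSwan rejects it.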
-
@derelict How would I bring up a traffic selector if I wanted to relay traffic from 10.54.0.0/16 (on Azure) to 192.168.122.0/24, which is at the end of one of the other IKEv2 tunnels?
-
If you have two traffic selectors that match on two different tunnels it is up to each side to determine which one to use.
The pfSense side generally uses the one that was established first.
Both P1s would need to have identical P2s on them.
Unsure what Azure would do in that case.
-
@derelict True that; Azure is pretty opaque in that regard. I spent a lot of time trying to collect its gateway logs. You can get stats, metrics, connection summary (up/down), ping, trace, and next hop, but no detailed connect-time logs.
I don't want to set up an environment where I am unsure how it works.
So I will remove 192.168.122.0/24 from the local network definition on the Azure side and see what happens; Azure should stop asking. If that works, then I'll add new P1/P2 connections from all my off-site pfSense routers to Azure. That will require us to maintain more connections and keys, something I was hoping to avoid. On the positive side, it'll save a small amount of traffic.
Will post again after that's been tried. Thanks for the insight.
I just noticed that I forgot to post the promised update. I eventually resolved my issue by fixing my P2 entries.
Since then I have stopped using Azure gateways and provisioned my own Azure VM running pfSense. Initial configuration on Azure was a challenge, primarily getting the Azure network to use the pfSense VM as the default gateway instead of Azure's default gateway. Once I figured that out, I was in full control. pfSense to pfSense works like a charm. Azure's VNet-to-VNet peering doesn't work in this configuration, so my current setup requires one pfSense VM in each Azure Virtual Network. I suspect I could figure that out too, but it doesn't seem to be worth the effort. Using pfSense to bridge, route, and filter traffic between subnets in an Azure Virtual Network should not be difficult: just add adapters to the VM.
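For anyone attempting the same thing: the piece that makes Azure send a subnet's traffic to the pfSense VM is a user-defined route table whose next hop is the appliance's private IP, plus IP forwarding enabled on its NIC. A sketch of the general shape with placeholder resource names (my-rg, my-vnet, the 10.54.1.4 address, etc.), not the exact commands from my deployment:

```shell
# Route all traffic via the pfSense VM's private IP
# (next hop type VirtualAppliance)
az network route-table create \
    --resource-group my-rg --name pfsense-routes
az network route-table route create \
    --resource-group my-rg --route-table-name pfsense-routes \
    --name default-via-pfsense \
    --address-prefix 0.0.0.0/0 \
    --next-hop-type VirtualAppliance \
    --next-hop-ip-address 10.54.1.4

# Associate the route table with each subnet that should use pfSense
az network vnet subnet update \
    --resource-group my-rg --vnet-name my-vnet --name servers \
    --route-table pfsense-routes

# The pfSense VM's NIC must be allowed to forward traffic
az network nic update \
    --resource-group my-rg --name pfsense-nic --ip-forwarding true
```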
-
Encountered the same error messages and symptoms. I had misconfigured PFS on one of the Phase 2 connections; setting both sides to the same option resolved the issue.
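In `swanctl.conf` terms (a sketch with placeholder names; pfSense sets this via the Phase 2 "PFS key group" dropdown), the point is that the ESP proposal, including the DH group used for PFS, has to be identical on both endpoints:

```
children {
    azure {
        # e.g. AES-256 / SHA-256 with PFS group 14 (modp2048) on BOTH sides.
        # One side proposing PFS while the other omits it, or the groups
        # differing, produces child SA failures like the ones above.
        esp_proposals = aes256-sha256-modp2048
    }
}
```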