I believe I have a similar issue with my IPsec tunnel. My setup is as follows:
pfSense with dual WAN. The WANs are configured in a failover gateway group (Tier 1 and Tier 2), and both have static public IP addresses.
pfSense with one WAN. It has a static public IP address.
I configured one tunnel in the main office that points towards the branch office using the WAN failover group. At the branch office, I set up two tunnels: one points towards the main WAN in the main office, and the other points towards the secondary WAN.
When the main WAN in the main office goes offline, the tunnel is re-established over the secondary WAN. But when the main WAN comes back online, there are two active tunnels to the branch office, one from each WAN. I don't know whether this is an issue with my configuration, since there is one tunnel on the main firewall and two tunnels on the other, or whether it is simply a limitation of this kind of setup.
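For what it's worth, the branch-office side of the topology you describe could be sketched roughly like this in strongSwan ipsec.conf notation (pfSense generates equivalent configuration from the GUI; all addresses and connection names below are made-up placeholders):

```
# Branch office: two phase-1 entries, one per main-office WAN.
# Addresses are hypothetical for illustration.
conn to-main-primary
    left=%defaultroute
    right=203.0.113.10        # main office primary WAN
    dpdaction=restart         # tear down and retry on dead peer
    auto=start

conn to-main-secondary
    left=%defaultroute
    right=198.51.100.10       # main office secondary WAN
    dpdaction=restart
    auto=start
```

If both connections are effectively set to initiate (auto=start), both will come up whenever both peers are reachable, which may be why you see two active tunnels once the primary WAN returns rather than a misconfiguration on your part.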
I am in the software and services business, and we have begun running into situations where some client host machines only have IPv6 because their ISPs have run out of IPv4 addresses. That means the only way they can reach my servers is via IPv6. There aren't many of them, and they are non-US, but they are important.
It's probably time for the industry to switch to an IPv6-first stance (Apple and Google seem to be there already). Given the absence of vigorous competition in my area, the ISPs are putting themselves before their customers. I am betting it's a common theme.
Thanks for the heads-up regarding the lack of fair play by Netflix. That's probably because they have restricted distribution rights for content and can't be sure of your location. You could probably work around it with a guest VLAN that has no IPv6. Kids are really good at getting and spreading computer viruses, so a guest VLAN would also help you minimize your risk.
I am going to see if I can get the addresses registered in a DNS server on the pfSense and replicate to my Windows AD Server. If I write some code that turns out to be useful I'll put it on GitHub and share a link here.
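In case it's useful to anyone trying the same thing, here is a minimal sketch of one way to do it: generate an nsupdate(1) batch that registers host/address pairs via RFC 2136 dynamic update, which both BIND on pfSense and a Windows AD DNS server (with nonsecure updates allowed, or via GSS-TSIG) can accept. The zone, server, and host names below are hypothetical placeholders, not anything from this thread.

```python
# Sketch: build nsupdate(1) input that replaces A/AAAA records for a
# list of (hostname, address) pairs in a given zone. Feed the output
# to `nsupdate` (optionally with -k for a TSIG key).

def nsupdate_script(server, zone, hosts, ttl=300):
    """Return nsupdate batch text registering each host in the zone."""
    lines = [f"server {server}", f"zone {zone}"]
    for name, addr in hosts:
        rtype = "AAAA" if ":" in addr else "A"   # crude v4/v6 detection
        lines.append(f"update delete {name}.{zone} {rtype}")
        lines.append(f"update add {name}.{zone} {ttl} {rtype} {addr}")
    lines.append("send")
    return "\n".join(lines)

# Hypothetical example hosts on a home network:
script = nsupdate_script("192.0.2.1", "home.example.com",
                         [("nas", "192.0.2.10"), ("nas", "2001:db8::10")])
print(script)
```

From there, replication to AD is just normal zone transfer or AD-integrated replication; the script only has to talk to one update-capable server.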
Yeah, there are several avenues to deal with the IPv6 and Netflix thing, but the kids are only here rarely and I have plenty of IDS/IPS protections for critical stuff. Also, it's only a home network. There are no national defense secrets, Democratic National Committee emails, or documents relating to secret payoffs to porn stars stored here ... LOL.
And yes, Netflix blocks HE IPv6 blocks for precisely the reason you stated: users without strict morals use those to get around geoip blocks that Netflix has in place to enforce their distribution contracts with content owners.
I wish all the ISPs of the world would just start supporting IPv6. Unfortunately that appears to be a very slow process. Even some of those that are supporting it are doing so in strange ways. They seem to be doing their darndest to avoid giving out static IPv6 addresses, for instance.
The benefit is that you don't need port forwarding at all, and you only need one port open. You can have HAProxy listen on the WAN on port 443 and send requests to the appropriate backend server based on the requested URL.
You don't have to remember which port each service is running on externally, just the FQDN.
It isn't necessarily any more secure, though. With only one firewall rule on WAN, you can't apply different rules to each service at the firewall level (connection limiting, traffic shaping, etc.).
You can still have HAProxy listen on different ports if you find you need that.
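To make that concrete, here is a rough sketch of what the generated HAProxy configuration looks like for hostname-based routing (all hostnames, backend names, and addresses are hypothetical; pfSense's HAProxy package builds equivalent config from the GUI):

```
# One HTTPS frontend on WAN:443; route by the requested Host header.
frontend https_in
    bind :443 ssl crt /var/etc/haproxy/wildcard.pem
    acl is_cloud hdr(host) -i cloud.example.com
    acl is_git   hdr(host) -i git.example.com
    use_backend cloud_srv if is_cloud
    use_backend git_srv   if is_git
    default_backend cloud_srv

backend cloud_srv
    server cloud1 10.17.0.20:443 ssl verify none check

backend git_srv
    server git1 10.17.0.21:3000 check
```

With SNI-based passthrough instead of termination you would match on `req.ssl_sni` in TCP mode, but the idea is the same: one open port, many services.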
@jimp On my installation, the default route is set to the cable modem static gateway, so no gateway group issue in play for me.
"Link#6" is igb5 which is my OPT4 interface going to the DSL modem. The modem has an IP of 192.168.254.254/24 (it's from Windstream), the pfSense box has .1, which we set statically.
"Link #2" is igb1 which is the static IP (/30) from Spectrum / Time Warner Business.
Destination Gateway Flags Netif Expire
default 70.60.x.y UGS igb1
184.108.40.206 192.168.254.254 UGHS igb5
220.127.116.11 70.60.x.y UGHS igb1
18.104.22.168 192.168.254.254 UGHS igb5
22.214.171.124 70.60.x.y UGHS igb1
126.96.36.199 192.168.254.254 UGHS igb5
188.8.131.52 70.60.x.y UGHS igb1
184.108.40.206 70.60.x.y UGHS igb1
10.17.0.0/24 link#1 U igb0
10.17.0.1 link#1 UHS lo0
10.254.254.0/24 10.254.254.2 UGS ovpns1
10.254.254.1 link#11 UHS lo0
10.254.254.2 link#11 UH ovpns1
70.60.x.w/30 link#2 U igb1
70.60.x.z link#2 UHS lo0
127.0.0.1 link#8 UH lo0
220.127.116.11 192.168.254.254 UGHS igb5
192.168.254.0/24 link#6 U igb5
192.168.254.1 link#6 UHS lo0
ping -c 4 192.168.254.254
PING 192.168.254.254 (192.168.254.254): 56 data bytes
64 bytes from 192.168.254.254: icmp_seq=0 ttl=64 time=1.170 ms
64 bytes from 192.168.254.254: icmp_seq=1 ttl=64 time=1.071 ms
64 bytes from 192.168.254.254: icmp_seq=2 ttl=64 time=1.084 ms
64 bytes from 192.168.254.254: icmp_seq=3 ttl=64 time=1.083 ms
--- 192.168.254.254 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 1.071/1.102/1.170/0.040 ms
arp -i igb5 -a
? (192.168.254.254) at 4c:17:eb:21:26:09 on igb5 expires in 1178 seconds [ethernet]
? (192.168.254.1) at 00:08:a2:09:5a:15 on igb5 permanent [ethernet]
traceroute -I -i igb5 checkip.dyndns.com
traceroute: Warning: checkip.dyndns.com has multiple addresses; using 18.104.22.168
traceroute to checkip.dyndns.com (22.214.171.124), 64 hops max, 48 byte packets
1 * * *
2 * * *
3 * * *
4 * * *
ping -c 4 -S 192.168.254.1 checkip.dyndns.com
PING checkip.dyndns.com (126.96.36.199) from 192.168.254.1: 56 data bytes
--- checkip.dyndns.com ping statistics ---
4 packets transmitted, 0 packets received, 100.0% packet loss
[This seems like it might be some of the problem...]
It seems like I can't get a traceroute (ICMP) through the crappy DSL modem. However, apinger is not complaining about the connection being down, and it is set to ping 188.8.131.52 from that interface (instead of using the gateway IP). So maybe there's an apinger bug in play here, and my connection is actually down but not being reported as such?
It would be immensely more helpful if BSD ping could be forced to send from a specific interface... I'm wondering if ping -S <int_ip> is somehow sending the traffic out the wrong interface, making this a red herring?
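As I understand it, that suspicion is plausible: on FreeBSD, `ping -S` (and curl's `--interface`, which on that platform resolves to the interface's address) only pins the *source IP* of outgoing packets via bind(); the kernel still picks the egress interface from the routing table based on the *destination*. A tiny illustration of that separation, using loopback addresses so it runs anywhere:

```python
# Demonstrates that binding a socket to a local address fixes the source
# IP but does not itself dictate routing: the route lookup happens on
# connect(), keyed by destination. (UDP connect() sends no packets.)
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("127.0.0.1", 0))      # pin the source address, like ping -S
s.connect(("127.0.0.2", 9))   # kernel performs a normal route lookup
src_ip = s.getsockname()[0]   # source stays as bound: 127.0.0.1
s.close()
```

So with the default route pointing at igb1, packets sourced from 192.168.254.1 can still try to leave via igb1, which would explain both the failed ping/traceroute and why adding a static route through 192.168.254.254 fixes it.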
So let's play some more - adding a static route to 184.108.40.206 (which is one of the IPs for checkip.dyndns.com) to force it through the DSL gateway:
route add -host 220.127.116.11 192.168.254.254
add host 18.104.22.168: gateway 192.168.254.254
PING 22.214.171.124 (126.96.36.199): 56 data bytes
64 bytes from 188.8.131.52: icmp_seq=0 ttl=49 time=149.241 ms
64 bytes from 184.108.40.206: icmp_seq=1 ttl=49 time=150.749 ms
64 bytes from 220.127.116.11: icmp_seq=2 ttl=49 time=151.432 ms
--- 18.104.22.168 ping statistics ---
3 packets transmitted, 3 packets received, 0.0% packet loss
traceroute -I 22.214.171.124
traceroute to 126.96.36.199 (188.8.131.52), 64 hops max, 48 byte packets
1 192.168.254.254 (192.168.254.254) 1.165 ms 0.912 ms 0.930 ms
2 h184.108.40.206.ip.windstream.net (220.127.116.11) 20.753 ms 20.375 ms 19.908 ms
3 ae2-0.agr03.hdsn01-oh.us.windstream.net (18.104.22.168) 21.559 ms 22.031 ms 21.936 ms
4 et9-0-0-0.cr01.cley01-oh.us.windstream.net (22.214.171.124) 24.981 ms 20.902 ms 24.131 ms
5 et11-0-0-0.cr01.chcg01-il.us.windstream.net (126.96.36.199) 27.746 ms 30.224 ms 30.609 ms
6 chi-b21-link.telia.net (188.8.131.52) 30.788 ms 32.021 ms 28.424 ms
7 nyk-bb3-link.telia.net (184.108.40.206) 147.362 ms 149.769 ms 147.627 ms
8 ldn-bb3-link.telia.net (220.127.116.11) 144.829 ms 144.190 ms 144.030 ms
9 hbg-bb1-link.telia.net (18.104.22.168) 140.047 ms 132.839 ms 130.322 ms
10 war-b1-link.telia.net (22.214.171.124) 145.479 ms 146.117 ms 145.660 ms
11 dnsnet-ic-320436-war-b1.c.telia.net (126.96.36.199) 151.281 ms 147.401 ms 151.799 ms
12 checkip.dyndns.com (188.8.131.52) 150.409 ms 152.881 ms 151.245 ms
curl -v --interface igb5 http://184.108.40.206
* Rebuilt URL to: http://220.127.116.11/
* Trying 18.104.22.168...
* TCP_NODELAY set
* Local Interface igb5 is ip 192.168.254.1 using address family 2
* Local port: 0
* Connected to 22.214.171.124 (126.96.36.199) port 80 (#0)
> GET / HTTP/1.1
> Host: 188.8.131.52
> User-Agent: curl/7.61.1
> Accept: */*
< HTTP/1.1 200 OK
< Content-Type: text/html
< Server: DynDNS-CheckIP/1.0.1
< Connection: close
< Cache-Control: no-cache
< Pragma: no-cache
< Content-Length: 104
<html><head><title>Current IP Check</title></head><body>Current IP Address: 75.90.aaa.bbb</body></html>
* Closing connection 0
[!! WORKS !!]
Something really weird is going on here. For whatever reason, the traffic is not correctly egressing through the specified interface when using curl --interface; it only goes the way we want if I manually add a static route. I'm not exactly sure how the PHP code is supposed to be hitting checkip.dyndns.com via a given interface, but something has changed behavior-wise that is making this fail. (Guessing it's an underlying OS thing at this point, but I suppose it could also be something with curl, if that is what the PHP code is calling under the hood?)