Thanks for the reply. I think I got my answer on policy-based VPN (like pfSense): it is not supported on TNSR.
What I am looking for is NAT before IPsec, because of overlapping private networks. Is that possible? The NAT documentation covers NAT at the interface level, but not what I am looking for (NAT applied before IPsec).
Never mind, I figured it out after finally finding a few other posts: the bnx2 driver is not supported. I added an additional NIC to the ESXi server that I had, and I was able to bring up a passthrough NIC. If this is your problem, add another NIC.
TNSR does not have named NAT pools, just one nameless global pool per VRF (and, I guess, a "twice-NAT" pool per VRF). Unfortunately, the capability to choose "no pool at all" was lost along with the capability to choose among pools!
That said, I hope you will not emulate Cisco's NAT configuration, because it's an incomprehensible disaster, especially when VRFs are involved.
Some of these configurations could be achieved more simply than on Cisco with a feature to recirculate the packet:
```
interface looppair natswitch
    ip address 10.99.99.1/24
    nat pool interface outside0
route ipv4 table ipv4-VRF:0
    next-hop 0 via 192.0.2.254                                    # no NAT
    next-hop 0 via 10.99.99.2 loopnexthop10 resolve-via-attached  # yes NAT
```
Combined with a hypothetical policy-routing feature that could consider source IP in routing decisions, this could handle a mix of RFC 1918 and globally-routable hosts behind a single "inside" interface. For example, route source IP 10.0.0.0/8 via loopnexthop10, and route source IP 198.51.100.0/24 via outside0 directly.
It creates a problem that a destination like 192.0.2.50 is directly attached to outside0, so, if NAT is wanted for that destination address, it's difficult to ignore outside0's automatically added direct route. The hypothetical policy-routing(*) feature could fix that too, by considering the ingress interface in routing decisions. Ingress interface looprecirculated10 would use the VRF's ordinary table, while ingress interface inside0 would apply the policy override and send 192.0.2.0/24 via loopnexthop10 at a higher priority than the directly-attached route.
Recirculation solves most VRF problems with three simple rules:
- Outside and inside interfaces don't need to be in the same VRF.
- To create or freshen a dynamic translation, the packet must enter a 'nat inside' interface and leave a 'nat outside' interface.
- To match an existing static or dynamic translation, the packet must enter a 'nat outside' interface. It doesn't matter where it exits, because that isn't known at translation time. However, the translation may move the packet to a different VRF before the routing-table lookup, if the VRFs differed when the dynamic translation was created.
The only things missing are:
- nat static mapping ... route-table outside-vrf local-route-table inside-vrf
(yes, one could add a "via ... next-hop-table" route to <local address> pointing to the other VRF, but this is not general enough. There's no reason <local address> shouldn't still be reachable directly in outside-vrf; the nat static rule should not have to cast a shadow over it. Also, there may be 10 inside VRFs, each with a local 192.168.1.1 listening on port 22, and we want to use <external ip> ports 2220, 2221, ... 2229 to reach each of them.)
- the ability to bind a 'nat pool' to a 'nat outside' interface.
(*) Unfortunately, if 'via classify <table>' was the syntax planned for policy routing, that syntax may not work intuitively, because the directly-attached outside0 subnet route needs to be overridden, not passed through a layer of indirection.
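To make the first missing piece concrete, here is a sketch of the 10-VRF SSH case. The syntax is hypothetical (TNSR does not accept 'local-route-table' today), and 203.0.113.10 is a made-up external address:

```
# Hypothetical syntax - not valid TNSR configuration today.
# Ten inside VRFs each have a host listening on 192.168.1.1:22;
# the external port (2220-2229) selects which VRF the connection lands in.
nat static mapping tcp local 192.168.1.1 22 external 203.0.113.10 2220 route-table outside-vrf local-route-table inside-vrf0
nat static mapping tcp local 192.168.1.1 22 external 203.0.113.10 2221 route-table outside-vrf local-route-table inside-vrf1
...
nat static mapping tcp local 192.168.1.1 22 external 203.0.113.10 2229 route-table outside-vrf local-route-table inside-vrf9
```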
@derelict Our company is moving from a 16-bit mask to smaller segments, and this is to be done as smoothly as possible. The problem is that the old addresses must stay the same, just with new subnets; i.e., 10.23.3.1/16 could now be 10.23.3.1/24. This obviously means there will be temporarily overlapping subnets.
The idea is to use VRFs to help get this done. I'm trying to get VRF route leaking to work, but somehow the interfaces aren't communicating. Therefore I thought it would be useful for troubleshooting to be able to ping from one VRF to another using the CLI. I know the CLI uses the default VRF, so it's probably not possible to ping from another VRF, but maybe there's some kind of cool command.
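If your TNSR version's ping accepts a route-table argument, cross-VRF troubleshooting might look something like the session below. The syntax, VRF names, and address are all hypothetical, so check `ping ?` on your build for the actual options:

```
tnsr# ping 10.23.3.1 route-table vrf-old count 3
tnsr# ping 10.23.3.1 route-table vrf-new count 3
```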
@vesalius no, they do not. RHEL can always be rebuilt from the source code that Red Hat MUST provide per the GPL. When CentOS was borged by Red Hat, the writing was on the wall for CentOS. The other rebuilds will continue because the RHEL sources will always be available. https://etc-md.com/2020/12/09/the-end-of-centos-and-my-moving-to-bsd/
TNSR provides a RESTCONF API. I'm not aware of anyone who has attempted to integrate or test the API against OpenStack/Ansible, but if those tools can make RESTCONF requests, it should be possible for them to configure TNSR.
The TNSR API docs can be found here: https://docs.netgate.com/tnsr/en/latest/api/
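If it helps anyone getting started, here is a minimal sketch of building a RESTCONF GET request in Python, following plain RFC 8040 conventions. The hostname and the `netgate-interface:interfaces-config` path are placeholders; verify the exact module names against the API docs for your TNSR version, and note that a real call also needs the client-certificate authentication TNSR typically requires:

```python
from urllib.parse import quote
import urllib.request


def restconf_get_request(host: str, yang_path: str) -> urllib.request.Request:
    """Build (but don't send) a RESTCONF GET for a YANG data path (RFC 8040)."""
    url = f"https://{host}/restconf/data/{quote(yang_path, safe=':/')}"
    return urllib.request.Request(
        url,
        method="GET",
        # RFC 8040 media type for JSON-encoded YANG data
        headers={"Accept": "application/yang-data+json"},
    )


# Hypothetical host; the YANG path is an example -- check the TNSR API docs.
req = restconf_get_request("tnsr.example.com", "netgate-interface:interfaces-config")
print(req.full_url)
```

To actually send it, you would load the TLS client certificate into an `ssl.SSLContext` and pass the request to `urllib.request.urlopen` (or do the equivalent in Ansible's `uri` module).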
Feel free to send more questions or update us on your progress!
The pricing from here is $500 per year for the Business Pro plan. There is no minimum/maximum speed; it's a flat rate. It's not a replacement product for pfSense, so your network doesn't need to be 'big enough' to benefit from it; you may use it even for the smallest networks if you want.
That is the kind of speed I see when one of my sides is not set to MTU 9000; double-check that on all your machines.
I found this tuning useful for Ubuntu https://fasterdata.es.net/host-tuning/linux/
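One quick way to confirm that MTU 9000 actually works end-to-end is a don't-fragment ping from a Linux host; the destination address below is a placeholder:

```
# 8972 = 9000 - 20 (IPv4 header) - 8 (ICMP header)
ping -M do -s 8972 10.0.0.2
# Replies mean jumbo frames pass end-to-end; "message too long"
# means some hop or NIC is still at a smaller MTU.
```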
Netgate provided an example of how to integrate Snort to create an IDS back in 2018, though it needs an update as TNSR has continued to evolve. From the 2018 blog:
TNSR-IDS is written in the Go programming language, allowing it to be easily compiled for a large number of OSes and architectures. Details, source code, and setup instructions (including TNSR, SNORT and ERSPAN) can be found at the TNSR-IDS Project GitHub Repository(https://github.com/Netgate/TNSR_IDS). A README file is included in the repository that provides a lot of detail about the process, as well as a TNSR-Snort setup file that gives detailed installation instructions.
I'd use that as a starting point, but there may well be some architectural or settings changes that need to be tweaked to get the spice flowing.