NFS Server WAN - mount within OPNsense (pfSense) LAN
-
Hello guys,
completely new OPNsense (and maybe soon pfSense) user here. I've read that OPNsense and pfSense should be pretty similar. Currently OPNsense is running in a VM, but I can't get a simple NFS mount to work. I don't know if this is an OPNsense bug or something else, so before I switch over to pfSense and start everything from scratch: can someone confirm that this works with pfSense? Some details about the current OPNsense setup:
I'm running a Proxmox cluster with two nodes. On each node, I'm trying to set up a VM with the latest OPNsense v22.7.8. I've read through all the NFS threads in the OPNsense forum, in the pfSense forum, and everywhere else, but I simply can't get a VM inside the OPNsense LAN to mount an NFS share. Before I moved this VM into the OPNsense LAN network, it was running on a normal Linux bridge and the NFS mount was working. As soon as I replace the old vmbr0 Linux bridge with the new vmbr11 (OPNsense VM NET), I can't mount the NFS share anymore. Internet is working, pinging is working. The Linux "ls -la" command for the mounted NFS share just shows:

d?????????? ? ? ? ? ?
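That listing usually means the mount point is there but every getattr on it fails. A quick way to confirm it's the mount itself that is broken (the /mnt/share path is a placeholder, not my real mount point):

mount | grep nfs4     # confirm the share really is mounted
stat /mnt/share       # surfaces the underlying error, e.g. "Operation not permitted"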
The bridge on the Proxmox node is configured like this:

auto vmbr10
iface vmbr10 inet static
    address 10.10.10.0/31
    bridge-ports none
    bridge-stp off
    bridge-fd 0
    post-up iptables -t nat -A POSTROUTING -s '10.10.10.1/31' -o enp0s31f6 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.10.10.1/31' -o enp0s31f6 -j MASQUERADE
The NFS server is a separate dedicated server with its own public IP. It is behind a firewall, too, where I allowed all ports, all protocols, simply everything for the two Proxmox nodes.
I'm working here on the second Proxmox node, which has its own public IP. On the node, I'm redirecting all traffic to OPNsense except a few TCP and UDP ports needed for Proxmox and SSH.
post-up iptables -t nat -A PREROUTING -i enp0s31f6 -p tcp -m multiport ! --dport 22,8006,179 -j DNAT --to 10.10.10.1
post-up iptables -t nat -A PREROUTING -i enp0s31f6 -p udp -m multiport ! --dport 5405:5412,4789 -j DNAT --to 10.10.10.1
I would say everything is working so far, but I can't get this NFS mount to work.
In OPNsense I unchecked "Block private networks" and "Block bogon networks" for WAN and LAN.
I can ping the NFS server from the node itself, from the OPNsense diagnostic ping tool (from both WAN and LAN), and from the VM itself. When I try to mount manually, I get
mount.nfs4: Operation not permitted
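For reference, the manual test mount looks roughly like this (export path and mount point are placeholders for my real ones):

mount -t nfs4 -o rw -v 136.XX.XX.XX:/export /mnt/test   # -v makes mount print each step it takes
dmesg | tail                                            # kernel NFS client messages often add more detail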
I've also done a packet capture; it shows the same error on both WAN and LAN:
ethertype IPv4 (0x0800), length 186: (tos 0x0, ttl 60, id 53543, offset 0, flags [DF], proto TCP (6), length 172) 136.XX.XX.XX.2049 > 10.10.10.1.18318: Flags [P.], cksum 0xe850 (correct), seq 629:749, ack 1297, win 501, options [nop,nop,TS val 3529236269 ecr 2468818434], length 120: NFS reply xid 961210637 reply ok 116 getattr ERROR: Operation not permitted
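(Capture taken with the GUI packet-capture tool; from a shell the equivalent would be roughly the following, where the interface name is just an assumption:)

tcpdump -nni vtnet0 -vvv host 136.XX.XX.XX and port 2049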
For auto-mounting I'm using autofs with the options "-fstype=nfs4,rw,retry=0". For the manual test mount, I'm also using NFSv4.
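Roughly, the autofs side looks like this (map file names, mount root, and export path are made-up placeholders; only the options are my real ones):

# /etc/auto.master
/mnt/nfs  /etc/auto.nfs  --timeout=60

# /etc/auto.nfs
share  -fstype=nfs4,rw,retry=0  136.XX.XX.XX:/export/share

# manual test mount, also NFSv4
mount -t nfs4 -o rw 136.XX.XX.XX:/export/share /mnt/test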
I've also experimented with "Firewall" -> "Settings" -> "Normalisation", checked and unchecked "Disable interface scrub", "IP Do-Not-Fragment" and "IP Random id".
And finally, I added OPNsense firewall rules like:
- "PASS IN IPv4 * * * 136.XX.XX.XX * * *"
- "PASS IN IPv4 * 136.XX.XX.XX * * * * *"
- "PASS OUT IPv4 * 136.XX.XX.XX * * * * *"
So basically allowing everything IN with destination 136.XX.XX.XX, as well as allowing everything IN and OUT with source 136.XX.XX.XX.
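In raw pf terms, as far as I understand it, those three rules amount to roughly:

pass in quick inet from any to 136.XX.XX.XX
pass in quick inet from 136.XX.XX.XX to any
pass out quick inet from 136.XX.XX.XX to any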
But something in OPNsense still won't let me mount the NFS share.
Am I missing something, or is this simply some strange misbehaviour in OPNsense, and it would work with pfSense?
-
I've set up pfSense 2.6.0 with the same settings except the IP addresses. A /31 subnet wasn't accepted by pfSense, therefore I used /30:
address 10.10.10.1/30
and "10.10.10.2" for the WAN IP.
Unfortunately it behaves exactly the same. I even found the settings under "System" -> "Advanced" -> "Firewall & NAT" and checked them:
- "IP Do-Not-Fragment compatibility": "Clear invalid DF bits instead of dropping the packets"
- "Disable Firewall Scrub": "Disables the PF scrubbing option which can sometimes interfere with NFS traffic."
Can I do something else? Is nobody here mounting NFS inside the LAN net?
-
Have you checked out this link: http://khelearning168.blogspot.com/p/blog-page_24.html? It steps through the requirements for configuring an NFS client on FreeBSD or pfSense. The NFS client is not automatically available in pfSense (and likely not in OPNsense either); you must configure it properly as shown in the link. I suspect your mount attempt is failing not because of firewall rules, but because the client is not properly configured on the firewall.
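From memory (this is my rough summary, not a copy of the linked page), the FreeBSD side boils down to enabling the NFS client services:

# /etc/rc.conf
nfs_client_enable="YES"
rpc_lockd_enable="YES"   # only needed for NFSv3 locking
rpc_statd_enable="YES"   # only needed for NFSv3 locking

service nfsclient start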
-
You want to mount NFS on a server running behind the FW on LAN?
And it should be accessible from WAN?
-
@bmeeks I've already checked out this article during my search for a solution, but I'm not trying to set up NFS on FreeBSD or pfSense. It's like @Cool_Corona said: I'm trying to mount NFS in a VM which is running behind pfSense, in its LAN network (vmbr11).
Sorry, I don't know what you mean by "and it should be accessible from WAN".
The NFS server is a separate dedicated server with its own public IP. The NFS clients are just VMs in a Proxmox cluster. When I switch the network device to vmbr11, the VM runs behind pfSense. The NFS server contains e.g. some configuration files the VM itself needs access to. For example, one VM has a dockerized Traefik proxy whose config files are on the NFS server.
So each VM goes from pfSense LAN -> WAN -> enp0s31f6 -> INTERNET -> 136.XX.XX.XX (NFS server).
-
@leonidas-o said in NFS Server WAN - mount within OPNsense (pfSense) LAN:
I'm trying to mount NFS in a VM which is running behind pfSense, in its LAN network (vmbr11).
Oh, sorry, I misread/misunderstood your post.
-
@bmeeks said in NFS Server WAN - mount within OPNsense (pfSense) LAN:
@leonidas-o said in NFS Server WAN - mount within OPNsense (pfSense) LAN:
I'm trying to mount NFS in a VM which is running behind pfSense, in its LAN network (vmbr11).
Oh, sorry, I misread/misunderstood your post.
Never mind, thanks anyway for trying to help. I wouldn't have thought that mounting an NFS share behind pfSense is such a big thing.
-
@leonidas-o Is the path from the client to the NFS server going via the virtual pfSense or over a separate hardware switch?
-
@cool_corona The main Proxmox server itself has only one hardware NIC, and the NFS server also has only one hardware NIC. So pfSense is operating via virtual bridges only.
-
@leonidas-o Then you're running VLANs, I presume?
-
@cool_corona No VLANs, really just one WAN (vmbr10) and one LAN (vmbr11). And for now I've just put one VM into the LAN (assigned vmbr11 to it).
Don't know if I'll use VLANs at all; maybe Proxmox EVPN (BGP) + VXLAN. We'll see, but for now it's just this simple setup.
-
@leonidas-o And are both the client and the NFS server connected to the LAN NIC?
-
The NFS server is a separate dedicated server with its own public IP. The NFS clients are just VMs in a Proxmox cluster. When I switch the network device to vmbr11, the VM runs behind pfSense. The NFS server contains e.g. some configuration files the VM itself needs access to. For example, one VM has a dockerized Traefik proxy whose config files are on the NFS server.
So each VM goes from pfSense LAN -> WAN -> enp0s31f6 -> INTERNET -> 136.XX.XX.XX (NFS server).
@cool_corona No, as already said, the NFS server is a separate dedicated server. It is from the same provider, but it is even in another datacenter. So basically I'm going all the way out: LAN -> WAN -> enp0s31f6 (the hardware NIC of the main server), over the provider's infrastructure (switches etc.) to the NFS server itself (the last entry in the traceroute results).
Traceroute:

1  10.10.10.1    0.258 ms  0.263 ms  0.254 ms
2  94.XX.XX.XX   0.542 ms  0.571 ms  0.531 ms
3  213.XX.XX.XX  3.984 ms  0.800 ms  0.865 ms
4  213.XX.XX.XX  0.563 ms  0.582 ms  0.566 ms
5  136.XX.XX.XX  0.785 ms !X  0.656 ms !X  0.821 ms !X

(The !X on the final hop means "communication administratively prohibited", presumably just the NFS server's own firewall reacting to the probes.)
-
@leonidas-o Try an outbound TCP/UDP rule from the specific client to the NFS server's public IP.
-
@cool_corona Unfortunately nothing changed: same behaviour, same error when using the suggested outbound rule.
-
@leonidas-o Keep the source IP, set the rest to any, and change the interface from LAN to WAN.
-
@cool_corona No, still not working. I tried what you said, and I actually tried every combination that came to my mind:
WAN/LAN, keeping the source, removing it, setting the destination, removing it, changing the translation address from "Interface Address" to the public IP of the Proxmox server. It's crazy that I can't (or don't know how to) view any logs with proper messages. The firewall system logs are not showing anything being blocked.
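The only place I found to watch anything live is pf's log interface from a shell on the firewall; note it only shows packets matching rules that have logging enabled, which may be why I see nothing:

tcpdump -netti pflog0 host 136.XX.XX.XX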
-
I've set the "Static Port" checkbox and immediately it worked. The mount was successful and I could see the files.
What does this checkbox really mean, or rather, what are the consequences when it is checked?
-
Here is a long thread about NFS and pfSense. It mentions that both NAT ports and the pfSense IP figure into the equation: https://www.experts-exchange.com/questions/29118882/nfsv4-mount-fails-with-operation-not-permitted.html. I found this link by specifically searching for nfs getattr operation not permitted as that is the error you saw in your packet capture.
The "operation not permitted" error says to me it is a server-side permissions issue of some sort. Further evidence it may be a server-side issue can be found here: https://serverfault.com/questions/577342/nfsv4-through-portforward. I think the port translations that are happening with the intervening OPNsense or pfSense firewalls are the culprit.
-
@leonidas-o said in NFS Server WAN - mount within OPNsense (pfSense) LAN:
I've set the "Static Port" checkbox and immediately it worked. The mount was successful and I could see the files.
What does this checkbox really mean, or rather, what are the consequences when it is checked?
Read the last link I posted in a reply above; I think it answers your question. pfSense is altering the source port number as part of NAT, and that trips up NFS on the server side. Checking "Static Port" says "don't mess with the source port". You could also try unchecking "Static Port" and instead setting the "insecure" option on the export on the server side to see if that works as well.
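In pf terms (my understanding of what the GUI generates, not copied from the pfSense source), "Static Port" turns the outbound NAT rule from port-rewriting into port-preserving; and on a Linux NFS server, "insecure" is the export flag that accepts source ports above 1023. Both sketched below with illustrative names:

# pf outbound NAT: default vs. "Static Port"
nat on $wan_if from $lan_net to any -> ($wan_if)              # source port may be rewritten
nat on $wan_if from $lan_net to any -> ($wan_if) static-port  # source port preserved

# Linux /etc/exports alternative (client address is a placeholder)
/export  client.example.com(rw,insecure)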