Is it possible to run pfSense behind a router -> switch on VMware ESXi?
Hi guys, does anyone have an idea how to solve my problem?
I have successfully installed pfSense on my VMware ESXi box.
Sadly, I think my set-up in the DC is wrecking my plan to have pfSense virtualized on each ESXi box.
Situation in the DC: Cisco router [no control; DC-assigned /24 subnet in a VLAN] -> switch: HP ProCurve 1810G-24.
The DC provides me with a network cable, which I plugged into a port on my switch, and everything has worked perfectly fine so far.
What I would like to achieve is having pfSense on each ESXi box to block certain countries, run Snort, and provide VPN access.
This is a clean install of the latest version of pfSense with open-vm-tools, on a clean install of VMware ESXi 5.5 patched to the latest level.
It all works fine, except that pfSense only blocks traffic to the IP on which it is installed; no traffic is blocked on the WAN to any other IP.
After installing pfSense and assigning the LAN IP with a DHCP range, I was able to install a new Debian VM on the LAN without any trouble.
The LAN works fine and the DHCP server works fine: the Debian VM got an IP from the DHCP server, can browse, and I can log in to the Dashboard. However, I noticed that the firewall logs only show blocks/rejects for the pfSense IP xxx.xxx.xxx.38.
I was also able to install a CentOS VM on the WAN, install extra software with yum, and ssh or ping to this other IP address xxx.xxx.xxx.35. I was under the impression that, with the current rules, no traffic would be allowed on the WAN on this ESXi box?
I have attached screenshots of the current set-up; any help is much appreciated.
Edit: I have tried promiscuous mode as well, although it is not really recommended in the book, but with no result either.
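For reference, when testing promiscuous mode it is worth confirming it is actually enabled on the vSwitch carrying the WAN port group, not just on the port group. A sketch from the ESXi shell, assuming the vSwitch is named vSwitch0 (adjust to your set-up):

```shell
# Allow promiscuous mode on the standard vSwitch (assumes vSwitch0)
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch0 --allow-promiscuous=true

# Verify the effective security policy
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0
```

Port-group-level settings override the vSwitch default, so check both if the behaviour does not change.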
pfSense can't help you if you're not routing through it. If you want to protect all those other servers, move them to LAN so they're "behind" pfSense and then port-forward their services.
I would prefer to keep the LAN for VPN and IPMI; perhaps moving the VMs to a DMZ and using bridge/transparent mode would work?
I have a full /24 subnet and need to be able to move VMs between ESXi hosts while keeping their assigned IPs.
I'm not sure that is the best way forward, though, as looking around the forum, many people have trouble with bridge/transparent mode.
I hope someone can advise on good practice with many [public] IPs, as all my VMs have public IPs at the moment.
Best practice is DMZ with port-forwards. Create IP Aliases for each of your public IPs. Create NAT/firewall rules that map each IP alias to a LAN server.
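To illustrate what that amounts to under the hood: pfSense's GUI NAT rules compile down to pf rules. A rough sketch in pf syntax, with hypothetical addresses (203.0.113.35 as a public IP alias on WAN, 192.168.1.35 as the internal server it maps to):

```
# Hypothetical sketch of the DMZ + port-forward approach in pf syntax
ext_if = "em0"

# Forward inbound web traffic for the public alias to the internal server
rdr on $ext_if proto tcp from any to 203.0.113.35 port 80 -> 192.168.1.35 port 80

# A 1:1 NAT mapping (all ports, both directions) would instead use binat:
# binat on $ext_if from 192.168.1.35 to any -> 203.0.113.35
```

In the GUI this corresponds to Firewall > Virtual IPs (the alias), Firewall > NAT (the port forward or 1:1 entry), and the firewall rules that pass the forwarded traffic.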
Thanks, will give that a try.
Check these out if you haven't already
Just some feedback,
I had that working fine, but I really don't want to change all the servers' IPs, so I have decided to go for transparent mode [bridge]. That way, if the firewall is down for whatever reason, I only have to move the servers from the DMZ back to the WAN port. Servers/services can keep their current IPs, and the 'design' is much simpler. Thanks very much for pointing me in the right direction. :)
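For anyone following along with transparent mode: pfSense on FreeBSD decides where bridge traffic is filtered via sysctl tunables (set under System > Advanced > System Tunables). A sketch of the commonly used combination, assuming you want rules applied on the bridge interface rather than on its member interfaces:

```shell
# Filter on the bridge interface itself...
sysctl net.link.bridge.pfil_bridge=1
# ...and not on the individual member interfaces
sysctl net.link.bridge.pfil_member=0
```

With the opposite combination, rules have to be duplicated on each member interface, which is a common source of confusion in bridge set-ups.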