pfSense (1.2) is dead - long live pfSense (2.3.3p1) - configuration help request



  • Hi All,

    I've just suffered the horrible death of our long-lived 1.2 pfSense server.  The issue was the physical destruction of the system drive (and computer in a move).

    Since we're putting together a new box, I decided to update to 2.3.3p1 64-bit.  The new box has two Intel 1GbE NICs and one Realtek 1GbE NIC.  My deployment requirement is a WAN, LAN, and DMZ (OPT1) configuration.

    For my WAN, I have a static IP.  For our internal LAN, we are using the 10.0.0.0/8 private net.  The CIDR block provided for the DMZ is a standard .0/28 segment that is riding on our fixed IP from our provider.

    This started out as a generally straightforward task until I discovered that our ex-network admin had been writing the config backups onto a separate partition on the system drive instead of onto another system.  C'est la vie, and onward I move…

    In setting up, I assigned the WAN to the re0 interface and entered the gateway and DNS servers defined by Cox.  I then set up the LAN on em0 with the 10.0.0.0/8 configuration, leaving the gateway blank.  I set up static DHCP mappings for the 12 systems that have fixed addresses and enabled static ARP entries for each.

    Problem 1: For NAT for email, I've assigned port forwards for SMTP/SMTPS/IMAP/IMAPS to point at our mail server running at 10.0.0.2 from WAN *.  According to the NAT Outbound panel, the reverse rules were also created for LAN to WAN.  However, nothing on the LAN can push out on any of the normal email ports.  Tracing the path shows packets being dropped at the firewall.  However, if I log into the pfSense system and then "telnet mail.somedomain.com 25", I get the normal ESMTP header.  Something's not sane in NAT land from what I can see.
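    To compare behaviour from a LAN host against the firewall itself, a small TCP probe like the one below can stand in for repeated telnet tests.  This is just a sketch; the host name and port list echo the examples in this post and would need to be swapped for your real targets:

```python
import socket

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical target from the post; run this from a LAN host and from
    # the pfSense box to see where the drop happens.
    for port in (25, 465, 143, 993):
        print(port, port_open("mail.somedomain.com", port))
</imports>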

    Problem 2: I assigned the OPT1/CIDR to em1.  The block is .160.0/28, so I assigned 160.1 as the gateway, 160.2 as the pfSense interface static address and then the 4 additional systems as 160.3/4/5/6.  After restarting everything, my DMZ machines all get their proper addresses via their separate switch, but none of them can see the 160.1 gateway or beyond.
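    One way to sanity-check the /28 layout described above is to let Python's `ipaddress` module enumerate it.  The prefix below (198.51.100.0/28, a documentation range) is a stand-in for the real provider block, since only the ".160.0/28" tail is given in the post; the host math is identical:

```python
import ipaddress

# Documentation-range stand-in for the provider's x.x.160.0/28 block.
net = ipaddress.ip_network("198.51.100.0/28")
hosts = list(net.hosts())        # 14 usable addresses, .1 through .14

gateway = hosts[0]               # .1  -> the provider gateway (.160.1)
pfsense_if = hosts[1]            # .2  -> pfSense DMZ interface (.160.2)
dmz_boxes = hosts[2:6]           # .3-.6 -> the four DMZ systems

# Every DMZ host must be configured with this netmask, or it will not
# see the .1 gateway even though addressing looks right.
print(net.netmask)               # 255.255.255.240
```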

    Any help with either or both of these is greatly appreciated.  I've ordered the updated pfSense Definitive Guide, but it won't be here until Friday …

    Thanks,
    Tim



  • @tolistim:

    For our internal LAN, we are using the 10.0.0.0/8 private net.

    All 16 million addresses available in a /8?
    Hope you separate it reasonably.
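    The arithmetic behind that remark, as a quick sketch:

```python
import ipaddress

lan = ipaddress.ip_network("10.0.0.0/8")
print(lan.num_addresses)                           # 16777216 addresses in a /8
print(sum(1 for _ in lan.subnets(new_prefix=16)))  # 256 /16s it could be carved into
```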

    @tolistim:

    According to the NAT Outbound panel, the reverse rules were also created for LAN to WAN.

    Unless you fiddled with it, there's no need for LAN -> WAN rules. A default allow-all (*) rule is already in place to get you started. And outbound NAT is on automatic.

    @tolistim:

    .. my DMZ machines … none of them can see the 160.1 gateway or beyond.

    Did you enable the interface and assign an IP with a netmask to it?
    Did you create rules for that interface to access anything?



  • Hi Chris,

    Yes, thanks - As to the /8 net, we have a very large VM test environment that we manage in a spec lab and they can get a bit "zealous" with their address grabbing.  So much so that a separate 192.168 wouldn't work.  This just makes it easier to segregate them from the rest of us.

    As for the LAN -> WAN, that's what I thought and that's what worked in 1.2.  However, I can't get any LAN system to telnet to any mail server on 25/465 - the packets are dropped at the LAN to WAN interface.  However, as mentioned, I CAN telnet from the pfSense system itself.  All other ports are fine (web, DNS, etc.) and we get mail just fine.

    Finally, on the CIDR, I am totally confused since I can ping all of the assigned systems from the pfSense system, but I can't see out on the WAN from any of them.  Here are my setup particulars:



  • Show your rules for LAN and Opt1



    The automatic NAT rules work only on outbound interfaces that have a gateway assigned. Check the outbound NAT settings to see whether you need hybrid, manual, or automatic NAT to be able to route traffic for your giant virtual environment.
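    That behaviour can be modelled in a few lines; this is only a toy illustration of the rule just described (the interface names and gateway value are hypothetical), not pfSense code:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Iface:
    name: str
    gateway: Optional[str] = None   # only WAN-type interfaces carry a gateway

def auto_nat_interfaces(ifaces: List[Iface]) -> List[str]:
    """Automatic outbound NAT rules are generated only for interfaces
    that have a gateway assigned (i.e. WAN-type interfaces)."""
    return [i.name for i in ifaces if i.gateway is not None]

ifaces = [Iface("wan", gateway="203.0.113.1"),   # hypothetical gateway
          Iface("lan"), Iface("opt1")]
print(auto_nat_interfaces(ifaces))               # ['wan']
```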

    PS: This is the oldest pfSense still in production on version 1.2 that I have seen :)



  • @marcelloc:

    PS: This is the oldest pfSense still in production on version 1.2 that I have seen :)

    Remember the user posts of "device uptime" on the m0n0wall site?
    I guess there are still some of those installs active in the field…



  • @marcelloc:

    The automatic NAT rules work only on outbound interfaces that have a gateway assigned. Check the outbound NAT settings to see whether you need hybrid, manual, or automatic NAT to be able to route traffic for your giant virtual environment.

    That's what I was suspicious of, but other outbound traffic works (web, custom port-monitoring connections, etc.).  It's only 25/465 that is being blocked, as far as I can see.  I'll add a custom outbound rule and see where that leads.

    PS: This is the oldest pfSense still in production on version 1.2 that I have seen :)

    We are strong proponents of "if it ain't broke, don't fix it."  Now, it's broke :(  It was running on a PIII 733MHz system with 384MB of RAM, installed on a 9GB pSCSI 40MHz drive  :o.



  • @jahonix:

    Show your rules for LAN and Opt1




  • @tolistim:

    Finally, on the CIDR, I am totally confused since I can ping all of the assigned systems from the pfSense system, but I can't see out on the WAN from any of them.

    Without any pass rules on your CIDR_24 interface this is expected behaviour.



    OK, so I'm totally messing this up and misunderstanding the rule requirements.

    The new pfSense Definitive Guide arrives today, so it's a good time to update to 2.3.4 and reset things.

    Once I have more meaningful questions, I'll revisit this.

