Looking for a specific pfSense / FreeBSD release
I'm having some trouble in a production environment, so I'd like to reproduce the same problems in my test environment in order to troubleshoot. I wasn't yet working at my firm when the installation was done, and I can't find the release that is installed in the production environment.
So I'm looking for this release (Live CD/Full installer):
2.0.1-RELEASE (i386) built on Mon Dec 12 18:24:17 EST 2011 FreeBSD XXX 8.1-RELEASE-p6 FreeBSD 8.1-RELEASE-p6 #1: Mon Dec 12 18:23:46 EST 2011 root@FreeBSD_8.0_pfSense_2.0-snaps.pfsense.org:/usr/obj./usr/pfSensesrc/src/sys/pfSense_SMP.8 i386
But the only 2.0.1 version I have been able to find so far in the pfSense downloads (Live CD/Full installer) is:
2.0.1-RELEASE (i386) built on Mon Dec 12 17:53:52 EST 2011 FreeBSD XXX 8.1-RELEASE-p6 FreeBSD 8.1-RELEASE-p6 #0: Mon Dec 12 17:53:00 EST 2011 root@FreeBSD_8.0_pfSense_2.0-snaps.pfsense.org:/usr/obj./usr/pfSensesrc/src/sys/pfSense_SMP.8 i386
With this latter release, I wasn't able to reproduce the problems I've experienced so far. The only information I have about the release I'm looking for is that some RC release was installed and then auto-updated, eventually becoming the release I'm looking for.
With both releases, Auto Update indicates that I'm on the latest version.
FYI, the main problem I experience is that, in the case of an "ifconfig <wan if> down" for example, only the WAN VIPs become master on the backup firewall, not the LAN ones, even though the sysctl "net.inet.carp.preempt" is set to 1 on both firewalls.
Can someone help me with this?
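For reference, the failover test described above can be sketched roughly like this (em0 is a placeholder for the actual WAN NIC; these commands are meant to be run on a live FreeBSD/pfSense box, not here):

```shell
# Verify preemption is enabled on BOTH firewalls:
sysctl net.inet.carp.preempt
# expected: net.inet.carp.preempt: 1

# Simulate a WAN failure on the master:
ifconfig em0 down

# With preempt=1, demoting one CARP interface should cause ALL CARP
# VIPs on the box to lose master status, so the backup firewall
# should take over the LAN VIPs as well as the WAN ones.
```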
Those are the same release; it looks like you're just comparing the live CD kernel and the installed kernel. We didn't build two different releases within 30 minutes of each other (that's impossible, one run takes longer than that); those are just two different kernels from the same build run. The #0 and #1 are the kernel build numbers within that run.
The problem you describe wouldn't be related to that, most commonly that happens when the two can't communicate with each other on LAN.
Thanks for answering.
The release information I gave was found under "Status: Dashboard -> System Information -> Version" on both pfSense boxes, so both values should refer to the same thing.
I'll look for another kernel to boot in my test environment.
Regarding my problem, LAN communication is OK (though filtered by firewall rules), and I'm using dedicated NICs with a crossover cable for pfsync / XMLRPC Sync.
I'll continue to investigate and if I need more help I'll open a new topic in the appropriate section.
That's probably the timestamp difference between the update file and the ISO then, I would guess. Regardless, it's the same kernel. Make sure you're not blocking or policy routing the CARP traffic, and verify via packet capture that they're getting the multicast between each other.
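A minimal capture along those lines might look like this (em1 is a placeholder for the LAN NIC; run it on each firewall and confirm both peers see the other's advertisements):

```shell
# CARP advertisements are IP protocol 112, sent to the
# multicast address 224.0.0.18:
tcpdump -ni em1 proto 112

# or, equivalently, filter on the CARP multicast group:
tcpdump -ni em1 host 224.0.0.18
```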
I'm neither blocking nor policy routing the CARP traffic. The firewalls are getting the multicast between each other.
But the WAN and LAN interfaces are in the same VLAN in the production environment (they are in separate VLANs in the testing environment), and that may cause some problems (http://forum.pfsense.org/index.php/topic,43102.0.html).
Moreover, I use a /32 netmask for my CARP VIPs in both the production and testing environments. I'm not sure that's correct, but in the testing environment I don't have any problems (all VIPs become master on the backup firewall).
That is definitely not correct. CARP VIP masks absolutely must match their parent interface subnet masks.
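As an illustration of that rule, with hypothetical addresses on a FreeBSD 8.x-era carp(4) device (interface names, VHID, password, and subnet are all made up for the example):

```shell
# Parent LAN interface configured with a /24:
ifconfig em1 inet 192.168.1.2 netmask 255.255.255.0

# Correct: the CARP VIP mask matches the parent's /24
ifconfig carp0 create
ifconfig carp0 vhid 1 pass mysecret 192.168.1.1/24

# Wrong: a /32 on the VIP does not match the parent subnet,
# which can break master election for that VIP:
# ifconfig carp0 vhid 1 pass mysecret 192.168.1.1/32
```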
I'll modify the VIP masks and isolate the LAN and WAN interfaces into separate VLANs in the production environment, and then I'll let you know.
I don't think it will be very soon though, because the VLAN isolation is a lot of work (at the moment there is neither cabling documentation nor documentation indicating which IP(s) are set on which server, etc.).
After modifying the VIP masks and isolating the LAN and WAN interfaces into separate VLANs, CARP now behaves as expected.
Since the firewalls are in a production environment, I couldn't test as much as I wanted because I had to minimize service downtime, so I'm not sure which change actually fixed the issue.