1.2.3 to 2.0.1 problem on upgrading to new hardware
I want to move from old hardware running 1.2.3 to a newer version of pfSense, so I got new hardware with 2.0.1 pre-installed and am now struggling to move the settings from the old system to the new one.
On 1.2.3 I took a full backup (there is no option to skip RRD data, but it doesn't seem to come along anyway), giving a ~319 KB .xml file, then took it to the new system and tried a full restore from that file.
Things looked fine and pfSense ate the old 1.2.3 config.xml and restarted itself.
Problems started quickly, and the cause seems to be insufficient free space on the /tmp mount of this new embedded NanoBSD system (only 38 MB total).
I suppose it is not possible to expand this /tmp mount or make it point somewhere else, because of the Flash Card and NanoBSD?
- I wouldn't want to risk upgrading the old firewall itself to 2.0.1, as our situation on the current hardware is very volatile with failed RAID arrays.
- I'm thinking of installing 1.2.3 on a temporary machine, restoring the 1.2.3 config there, upgrading that machine to 2.0.1 and then exporting the converted config to my new firewall. Is there any reason why I shouldn't do this? (Could this be done in a virtual machine? Is the number of NICs the key?)
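A third idea I'm considering: strip the embedded RRD data out of the backup before restoring it, so the 2.0.1 RRD migration has nothing to unpack into the tiny /tmp. This is a rough sketch under my assumptions: that the 1.2.x backup embeds the graphs in an `<rrddata>...</rrddata>` element of config.xml, and the file name here is just an example.

```shell
# Sketch: drop the embedded RRD data from a 1.2.3 backup before restore.
# Assumption: graphs live in an <rrddata>...</rrddata> element (my reading
# of the 1.2.x config format); config-1.2.3.xml is an example file name.
CFG=config-1.2.3.xml
cp "$CFG" "$CFG.bak"                                  # keep the untouched original
sed '/<rrddata>/,/<\/rrddata>/d' "$CFG.bak" > "$CFG"  # delete the whole element
```

If most of the ~319 KB was RRD data, the stripped file should come out much smaller, and the restore should skip the migration that fills /tmp.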
2x HP ProLiant DL140
2.0.1-RELEASE (pre-installed embedded system; we could have selected 1.2.3 as well, but thought 2.0.1 would be the better choice at the time.)
OPNsense Quad Core HD rack edition - 19" pfSense appliance with 4Gb Compact Flash SLC
Boot output on the new system after a Restore "ALL" with the old config.xml:
Starting device manager (devd)…done.
Updating configuration...........................Migrate RRD database WANGW-quality.rrd to new format
Migrate RRD database VPN2-quality.rrd to new format
Migrate RRD database SROUTE0-quality.rrd to new format
Migrate RRD database ISPGW-quality.rrd to new format
Migrate RRD database GW_WAN-quality.rrd to new format
Migrate RRD database GW_OPT27-quality.rrd to new format
Migrate RRD database GW-quality.rrd to new format
Migrate RRD database AccessToOld-quality.rrd to new format
Migrate RRD database wan-traffic.rrd to new format
Migrate RRD database wan-packets.rrd to new format
Migrate RRD database opt9-traffic.rrd to new format
Migrate RRD database opt9-packets.rrd to new format
Migrate RRD database opt8-traffic.rrd to new format
Migrate RRD database opt8-packets.rrd to new format
Migrate RRD database opt7-traffic.rrd to new format
Migrate RRD database opt7-packets.rrd to new format
Migrate RRD database opt6-traffic.rrd to new format
Migrate RRD database opt6-packets.rrd to new format
Migrate RRD database opt5-traffic.rrd to new format
pid 483 (rrdtool), uid 0 inumber 69 on /tmp: filesystem full
/tmp: write failed, filesystem is full
Warning: file_get_contents(/tmp/opt5-traffic.rrd.tmp.xml): failed to open stream: No such file or directory in /etc/inc/upgrade_config.inc on line 2044
Warning: Invalid argument supplied for foreach() in /etc/inc/rrd.inc on line 74
Warning: Invalid argument supplied for foreach() in /etc/inc/rrd.inc on line 89
inumber 72 on /tmp: filesystem full
/tmp: write failed, filesystem is full
.......and it goes on and on with failures and warnings for hundreds of lines until the end, where all the VLANs show up, the normal menu appears, and the version is reported correctly
[2.0.1-RELEASE][firstname.lastname@example.org]/root(1): df -h
Filesystem Size Used Avail Capacity Mounted on
/dev/ufs/pfsense0 1.8G 180M 1.5G 11% /
devfs 1.0K 1.0K 0B 100% /dev
/dev/md0 38M 38M -3.1M 109% /tmp
/dev/md1 58M 38M 15M 72% /var
/dev/ufs/cf 48M 1.1M 43M 2% /cf
At that point the web interface does not work, and funnily enough "Reboot system" does not work either because "/tmp: write failed, filesystem is full" :) Deleting some files from /tmp helps, and restarting the webConfigurator brings the web interface up nicely.
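For anyone hitting the same wall, roughly what I did from the console shell. The file pattern matches the `*.rrd.tmp.xml` leftovers named in the boot errors above; the restart script path is my assumption from poking around 2.0, so adjust if yours differs.

```shell
# Free /tmp by removing the RRD migration's leftover scratch files
# (pattern based on the *.rrd.tmp.xml names in the boot errors above).
rm -f /tmp/*.rrd.tmp.xml
df -h /tmp                   # check that /tmp has free space again
# Restart the webConfigurator; script path assumed from pfSense 2.0.x.
/etc/rc.restart_webgui
```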
In the web interface it seems that everything got restored apart from the theme. Rules, NAT, interfaces, IPsec settings, it's all there..
So my question is… what were all those errors about then? Could this be a working system now? I'll have to go through the settings thoroughly tomorrow and check whether anything is missing...
Any response is welcome, also wondering if anyone is familiar with this kind of a situation.
What's in /tmp? Should be virtually nothing there and definitely not enough to run out of space.
I have not seen a separate /tmp mount in any other pfSense installation before; it seems to be specific to this hardware though.
**edit: this folder had .log, .status, .lock and .boot files in it, about 500 KB worth.
After thoroughly checking all the rules and settings after the 1.2.3 -> 2.0.1 upgrade, it seems that everything actually went through cleanly, with the minor exception of one gateway not being assigned correctly in the filter rules, but that was easy to fix.
I guess this issue of mine is resolved; if everything goes well in production I'll let you know.
Everything went smoothly after we put this shiny new hardware and pfSense version into production, except CARP failover isn't working as it should…
Firewall1 shows all VIPs as MASTER -- but Firewall2 shows all external (WAN) VIPs as MASTER too, while the rest of its VIPs are in BACKUP status.
There seems to be some sort of collision on the WAN when both firewalls are connected: the WAN works, but there is huge packet loss and speeds are well below normal.
When we unplug the WAN cable from Firewall2, everything goes back to normal.
There does not seem to be anything wrong with the IP addressing between the firewalls; each has its own unique addresses.
CARP sync settings (state sync & sync interface, peer IP and XMLRPC sync) are enabled only on Firewall1.
So far I have tried rebooting both firewalls in turn and disabling/enabling CARP on both. Some Googling suggested trying a direct cable between the firewalls' WAN ports so they could communicate and negotiate the MASTER/BACKUP roles correctly, but that did not help..
Could there be something wrong in 1.2.3 settings that were converted into 2.0.1?
What about packages? We had a bunch of packages in 1.2.3, and now none of them are present. (From the Upgrade Guide: "When upgrading 1.2.3 to 2.0, you should uninstall all packages first, then perform the upgrade.")
That's what happens when the firewalls can't talk to each other on the WAN, or at least can't pass multicast between them on the WAN -- though usually it's that they can't talk at all.
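A quick way to confirm: CARP advertisements are IP protocol 112 sent to the multicast address 224.0.0.18. Watch the WAN on both boxes and check whether each one sees the other's advertisements (the interface name here is just an example; substitute your actual WAN interface):

```shell
# Run on the WAN interface of each firewall (em0 is an example name).
# If Firewall2 never sees Firewall1's advertisements, it assumes the
# master is gone and promotes itself -- the dual-MASTER collision above.
tcpdump -ni em0 host 224.0.0.18 and proto 112
```

If Firewall1's advertisements never show up in Firewall2's capture, something on the WAN segment (switch, modem, filtering) is eating the multicast.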