Warning: fopen(/tmp/config.lock): failed to open stream
-
I have just checked with one of my colleagues; he has installed the AMD64 version but faced the same issue.
Warning: fopen(/tmp/config.lock): failed to open stream: Device not configured in /etc/inc/util.inc on line 123
Warning: flock() expects parameter 1 to be resource, null given in /etc/inc/util.inc on line 134
Warning: fclose(): supplied argument is not a valid stream resource in /etc/inc/util.inc on line 135
Warning: session_start(): open(/var/tmp//sess_fd59264b4e5cf3b6ae1121f1ccf4762a, O_RDWR) failed: No space left on device (28) in /etc/inc/auth.inc on line 1254
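For anyone hitting these warnings: the "No space left on device" error can come from either blocks or inodes being exhausted, and a quick console check rules that in or out. A minimal sketch (the 90% warning threshold is my own arbitrary choice, not a pfSense default):

```shell
#!/bin/sh
# Sketch: warn when /tmp or /var is nearly out of space.
# The 90% threshold is an arbitrary assumption; adjust to taste.
THRESHOLD=90
for mp in /tmp /var; do
    # -P gives the portable one-line-per-filesystem format; field 5 is "Use%"
    pct=$(df -P "$mp" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
    if [ "$pct" -ge "$THRESHOLD" ]; then
        echo "$mp is ${pct}% full"
    else
        echo "$mp usage OK (${pct}%)"
    fi
done
# Inode exhaustion produces the same "No space left on device" error:
df -i /tmp /var
```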
More likely than not, the disk is dead. If you have multiple units doing it, then it wouldn't be the first time someone got a bad batch of SSDs. We've seen similar bad batches with both Kingston and Crucial in the past.
It comes down to one of three possibilities:
1. The drive really is full, or at least /var and /tmp both are (unlikely)
2. The disk is dead, in which case it's a coin toss as to whether the system will come back after a reboot
3. The disk has stopped responding to the operating system due to a controller issue, which may be OK after a reboot. Sometimes there are firmware updates for SSDs to help with such problems.
More info here:
https://doc.pfsense.org/index.php/Filesystem_Full_/_Out_of_Inode_Errors
-
Hi Steve,
Yes, it is identical on all three sites.
These Crucial SSDs are brand-new drives.
Thanks
-
Well, as Jim said, they could potentially be from a bad batch of drives if you bought them all together or maybe they have bad firmware that can be updated. :-\ Have you tried using any other drives?
Steve
-
I was facing the same problem, and it was with a SATA hard disk. I switched to an IDE hard disk (luckily I found a 40 GB IDE drive) and changed the BIOS hard disk detection from Auto to LBA (Logical Block Addressing). It has been working for two days now; I hope this has solved my problem.
-
Hi Guys,
We have faced the issue again at one of our sites, and we had to rebuild the firewall as it crashed after we rebooted.
Please let me know if anyone has a solution for this.
Thanks
-
Have you checked the SMART parameters from these SSDs? Firmware updates?
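For reference, a rough sketch of that check from the console (assuming the smartmontools package is installed; /dev/ada0 is just the usual first SATA device name on FreeBSD/pfSense, so substitute your own):

```shell
#!/bin/sh
# Sketch: pull SMART health and attributes for one drive. Assumes the
# smartmontools package is installed; /dev/ada0 is an assumed device name.
DEV=${1:-/dev/ada0}
if ! command -v smartctl >/dev/null 2>&1; then
    echo "smartctl not found; install the smartmontools package first"
    exit 0
fi
# smartctl encodes drive problems in its exit-status bitmask, so don't
# abort the script on a non-zero return.
smartctl -H "$DEV" || true   # overall health self-assessment (PASSED/FAILED)
smartctl -A "$DEV" || true   # attribute table: reallocated sectors, wear level
```

On a failing SSD, the attribute table (reallocated sector counts, media wearout indicators) is often more telling than the pass/fail verdict alone.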
Steve
-
Hi Steve,
Yes, we have run a SMART check on the SSD and it seems fine, but yesterday one more pfSense box failed with the same error.
Please have a look at the SMART report.
Thanks
-
I had the same issue. Before, I was using a 500 GB SATA drive, and I also tried a 320 GB one, but now I am using a 40 GB IDE drive. First, check in the BIOS: go to the device information page, press Enter on the hard disk entry, and change the detection method from Auto to LBA. This trick worked for me; pfSense has been working fine for two weeks now.
-
@ishtiaqaj:
I had the same issue. Before, I was using a 500 GB SATA drive, and I also tried a 320 GB one, but now I am using a 40 GB IDE drive. First, check in the BIOS: go to the device information page, press Enter on the hard disk entry, and change the detection method from Auto to LBA. This trick worked for me; pfSense has been working fine for two weeks now.
ishtiaqaj, we are not using IDE hard disks; we are using SSDs. Our firewall also works fine for a few weeks at a time, but then it throws this error again.
-
@ishtiaqaj:
I had the same issue. Before, I was using a 500 GB SATA drive, and I also tried a 320 GB one, but now I am using a 40 GB IDE drive. First, check in the BIOS: go to the device information page, press Enter on the hard disk entry, and change the detection method from Auto to LBA. This trick worked for me; pfSense has been working fine for two weeks now.
ishtiaqaj, we are not using IDE hard disks; we are using SSDs. Our firewall also works fine for a few weeks at a time, but then it throws this error again.
Dear Tarun, you should also check the BIOS HD option; there are three options: LBA, CHS, and Large. You should try Large; maybe it solves your problem. Yes, the firewall works for some days and then goes down…
-
The SMART parameters all look reasonable.
You said you tried multiple drives, but were they all identical SSDs? As Jim suggested here, and Chris in your other thread, it's not unheard of for a whole batch of drives to have a firmware bug that your particular setup is triggering.
Have you tried running from a standard SATA drive as a test?
Steve
-
I have encountered the same problem, when I activated, under System: Advanced: Miscellaneous, "Use RAM Disks" with the default /tmp (40 MB) and /var (60 MB) RAM disk sizes and a 3-hour periodic backup frequency.
I had to restore the configuration to an earlier state, without these settings activated, via SSH and reboot. (I also uninstalled the HAVP package and dashboard widget, but I don't think that was the issue.)
I can finally log back in via web interface.
Here is some info from the system without these settings enabled:
__________________________________
2.1.5-RELEASE (amd64) built on Mon Aug 25 07:44:45 EDT 2014
FreeBSD 8.3-RELEASE-p16
Intel(R) Core(TM)2 Duo CPU E7400 @ 2.80GHz
2 CPUs: 1 package(s) x 2 core(s)
Memory usage 18% of 4073 MB
SWAP usage 0% of 8192 MB
Disk usage 1% of 443G
____________________________
Installed packages:
Cron 0.1.9
Lightsquid 2.41
pfBlocker 1.0.2
Sarg 2.3.6_2 pkg v.0.6.3
squid3 3.1.20 pkg 2.1.2
squidGuard-squid3 1.4_4 pkg v.1.9.12
_________________________________
Output of pkg_info:
bsdinstaller-2.0.2014.0430 BSD Installer mega-package
compat6x-amd64-6.4.604000.200810_3 A convenience package to install the compat6x libraries
gettext-0.18.1.1 GNU gettext package
libexecinfo-1.1_3 A library for inspecting program's backtrace
libiconv-1.14 A character set conversion library
p5-Digest-HMAC-1.03 Perl5 interface to HMAC Message-Digest Algorithms
p5-IO-Socket-SSL-1.52 Perl5 interface to SSL sockets
p5-Net-SMTP-TLS-0.12_1 An SMTP client supporting TLS and AUTH
p5-Net-SSLeay-1.42 Perl5 interface to SSL
perl-5.12.4_3 Practical Extraction and Report Language
pkg_info: the package info for package 'pkg' is corrupt
pkgconf-0.8.9 Utility to help to configure compiler and linker flags
python27-2.7.3_3 An interpreted object-oriented programming language
samba36-smbclient-3.6.7 Samba "ftp-like" client
talloc-2.0.7 Hierarchical pool based memory allocator
tdb-1.2.9,1 Trivial Database
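If it happens again, it may be worth seeing what is actually eating /tmp and /var before restoring. A rough sketch, assuming the stock pfSense paths (PHP session files land in /var/tmp, as the `sess_*` error above shows):

```shell
#!/bin/sh
# Sketch: list the biggest space consumers under /tmp and /var, then count
# the PHP session files that can pile up in /var/tmp on pfSense.
# -x stays on one filesystem; -k prints sizes in KB for numeric sorting.
du -xk /tmp /var 2>/dev/null | sort -rn | head -n 20
# With no matches, ls fails quietly and wc reports 0 session files.
sessions=$(ls /var/tmp/sess_* 2>/dev/null | wc -l)
echo "PHP session files in /var/tmp: $sessions"
```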
-
There's a good chance the HAVP package could have filled /tmp there. That's normally only an issue on embedded (Nano) installs, where running HAVP is not recommended.
Increase the size of /tmp if that's the case.
Steve