Virtual disk capacity differs from the disk capacity shown in the WebGUI
-
Help!
I ran out of disk storage on my virtual disk.
I am using Oracle VM VirtualBox and had set my storage to:
1. Type: (Normal) VMDK
2. Virtual Size: 10GB
3. Details: Dynamically allocated
After a month my storage hit 100%, so the dynamic allocation did not help at all, and I had to adjust the size manually using the CloneVDI tool. My VMDK was converted to VDI and I was able to grow it from 10GB to 999GB (I have a 1TB physical HD). I mounted it as my new virtual drive, and the VirtualBox manager now shows:
1. Type: Normal (VDI)
2. Virtual Size: 999GB
3. Actual Size: 7.82GB
4. Details: Dynamically allocated
I logged into my WebGUI, and the dashboard still shows:
Disk Usage: 100%
My antivirus has stopped too; perhaps there's no more room to download its daily update.
Is there anything that I need to "apply" or reconfigure in order to get my WebGUI to read the new 999GB of disk storage?
Any advice is much appreciated.
–Nubee
-
The issue is (I think) that even though you have expanded your disk capacity, you have not expanded your partitions.
I would not bother with that crap. I'd reinstall on the new disk, using the entire space (but you really shouldn't need more than 50GB or so; 100GB would be super safe). Then restore settings.
Otherwise, look up guides on FreeBSD partition resizing.
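I haven't walked through that exact procedure myself, but on FreeBSD the rough shape of it is the gpart/growfs dance. A minimal sketch, assuming the disk shows up as ada0 with a GPT layout and the filesystem you want to grow is partition index 1 (check gpart show first, and back up before touching anything):
gpart show ada0          # confirm the layout and the free space at the end
gpart recover ada0       # repair the GPT backup header after the disk grew
gpart resize -i 1 ada0   # grow partition 1 into the freed space
growfs /dev/ada0p1       # grow the UFS filesystem to fill the partition
On an older MBR/slice layout the device names look like ada0s1a instead, and growing the root filesystem may require dropping to single-user mode first.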
-
But re-installing would erase all the vouchers I generated, right?
I tried checking the FreeBSD website, but I just can't find the steps for adjusting the partition. Could you help, please?
-
Could this work?
http://bsdbased.com/2009/11/30/grow-freebsd-ufs-filesystem-on-vmware-hdds
-
I have never used that set of directions, but yeah - that's pretty much the goat rope you have to go through.
-
In Windows… that only takes seconds to do. In FreeBSD everything has to be shut down and messed with.
Waste of time due to poor design.
-
I'm not sure if onlineph would lose his vouchers or not if he just backed up, wiped, reinstalled and restored.
The fault isn't FreeBSD's. The fault was making the mistake of allocating such a small drive in the first place.
Properly expanding things is worth a try, but if that fails, I'd just go with plan B ^^^^^
-
But shouldn't a modern OS be able to do that effortlessly and without downtime?
-
Not if it's meant to be lightweight and headless. Otherwise, sure.
For instance, Ubuntu has a disk utility that knocks that stuff right out in the desktop version.
Go to the server install of the headless sort and it's back to the command line.
-
The funny part is that it could be integrated into the GUI of pfSense fairly easily…
Why isn't it in there, since it's widely used in production environments?
-
I'm guessing it's because just reinstalling and restoring settings is so easy, and this sort of problem isn't an everyday occurrence? But sure. More tools are nice. I agree.
-
Hi,
I'm still figuring out how to get pfSense to acknowledge my newly expanded 999GB. And every now and then my box loses its connection, I mean on the LAN end (no problem with the WAN, it's always up). I think it's an effect of the full-disk thing, so I have to restart to restore the connection on the LAN side. I am also seeing a "/: write failed, file system is full" error. It only appeared once I hit a 100% full disk.
Would making pfSense acknowledge the 999GB virtual disk clear the "write failed" error, or is this just another error?
:-[
-
Yes - it would fix your problems, but rather than fooling with it I think you should back up your settings, allocate about a 64GB virtual disk to pfSense (or maybe 100GB at most), reinstall, and then restore settings.
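The supported way to back up is Diagnostics > Backup/Restore in the WebGUI, but if you want a belt-and-suspenders copy from the shell too, something like this should do it (assuming the stock pfSense layout, where the running config lives at /cf/conf/config.xml):
cp /cf/conf/config.xml /root/config-backup.xml   # copy the running config
Then scp that file off the box before you blow the disk away.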
Alternatively, you could also blow away the squid cache for now. I assume it's the growth of the squid cache that is actually the source of all these problems? Mine grows up to about 20GB and then just sort of fluctuates around that size. It actually shrinks and grows.
The commands to flush the squid cache are:
In the web GUI, stop the squid process.
Then go to the pfSense console.
Go to shell:
cd /var/squid/cache
rm -rf *
squid -z
Then reboot pfSense.
(This will probably free up a lot of your space and temporarily fix your issues for a month, but you need to do the permanent fix eventually.)
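Once it comes back up, it's worth confirming the space was actually freed. Nothing pfSense-specific here, just standard FreeBSD tools (assuming the default /var/squid/cache location):
df -h /                  # root filesystem usage should have dropped
du -sh /var/squid/cache  # the freshly rebuilt cache should be nearly empty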
-
Okay, I'll give it a try, crossing my fingers.
Oh, and I am using lusca-cache instead. Is that the same as the squid thing? Is this the one that I need to disable? Is it the same squid cache? I'm really sorry, I guess this is a very elementary question, but you see, I'm still learning the hard way ::)
I also assume it's the cached files that really occupied my space. Hmm, actually, the disk went full right after I tried to configure OpenVPN. When I activated OpenVPN it hung first, then my console started filling up with the "write failed" messages, so I deleted and uninstalled the "OpenVPN Client Export Utility", but the error stayed and swamped my storage, argh!
-
lusca-cache - it's like squid, but it caches A LOT more aggressively.
Still, I'd say 100GB is all you should need. You should also cap the Lusca cache at 40GB or so, otherwise you will start having RAM issues also. How much RAM is there?
For lusca, the process seems the same:
Maybe clear the lusca cache first:
rm -r /var/squid/cache
After it finishes, restart the pfSense box,
then type squid -z. Hopefully that's OK.
-
I have 4GB physical RAM attached.
I know this is another elementary question, but where can I cap the lusca cache? Can I do it from the GUI?
And yes, I will clear it eventually. But won't it affect my clients' internet access speed?
-
No, not the speed, but the delivery of cached content… until it's cached again.
-
Yeah - it will affect clients behind the box, sure, until it's re-cached, as supermule said. But less so than having a pfSense that's crashing due to a full disk.
Also, lusca / squid has to index the HDD cache in RAM, so you can't just have an arbitrarily huge disk cache.
Let's say you are OK with squid consuming half your RAM just to index the disk cache. Figure you need RAM equal to about 5% of the disk cache size to index those files.
I have observed that it seems closer to 4%, but I like to be safe.
So take 100 / 5 = 20, then 20 × 2GB (because with your 4GB you are OK with using 2GB just for indexing the cache) = 40GB of disk cache.
If the files on disk are mostly huge, there will be fewer of them, so less RAM is required to index them.
Anyway - yes. You set the maximum cache size in Services > Proxy server > Cache management,
at the top; 40GB is about 40960MB, so put that number in the top field.
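If you want to sanity-check that arithmetic at any shell (nothing pfSense-specific; this is just the 5%-index rule of thumb from above):
echo $((2048 * 100 / 5))   # 2GB (2048MB) RAM index budget at 5% overhead -> 40960MB of disk cache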
(What is that number now?)
** Warning… I think when you change that number it will force you to remake the squid cache, so whatever is currently cached will be lost. **
(No sooner do I type this than someone will say they are running 2TB of cache on a system with 1GB of RAM. Cool - but it will crash eventually.)
Anyway - You are not going to get off easy. You are going to lose your current cache more than likely no matter what you do. May as well just make it nice and stable all at once.
-
Now I'm learning…
As a point of discussion, here is my current Proxy server: Cache management configuration. Please let me know if I am overdoing it; I'd really appreciate corrections:
Hard disk cache system: coss+aufs
coss Hard disk cache size: 50
Hard disk cache size: 900000
Object size: 4
Memory cache size: 8
Max memory object size: 4
Minimum object size: 0
Maximum object size: 1024
Level 1 subdirectories: 16
Memory replacement policy: HEAP GDSF
Cache replacement policy: Heap LFUDA
Low-water-mark in %: 90
High-water-mark in %: 95
Please don't ask me about those values, as honestly I don't know what they are.
Okay, so which field do I put the 40960 value in?
And is there any other setting in my current config that I need to correct?
You mentioned that changing the number will force the squid cache to be remade, so whatever is cached will be lost. Would that mean there's no need for me to rm -r /var/squid/cache?
Thanks in advance.
-
Well - the one you really need to make smaller is the 900000 (Hard disk cache size) - that's nearly a TB.
Unless you have 36GB of RAM in your box to spare? I'd set it to 40960 or so.
That's after you reinstall, of course, on a virtual disk that will hold it all.