RAM disk on upgrade from 2.4.4 to 2.4.5p1 and kernel memory
-
Been running 2.4.4 for over a year with no issues. The /tmp RAM disk was set to 1024 (1 GB) and the /var RAM disk was set to 2048 (2 GB). On upgrading to 2.4.5p1 I was told the RAM disk couldn't be set up and to check the settings. On checking the settings, it said the maximum total size of all RAM disks cannot exceed available kernel memory, and it quoted a maximum of 350 MiB. My hardware hasn't changed; it still has 8 GB of RAM and the dashboard is showing 8% used. Why am I having this problem? I set the RAM disks to 64/256 for now so I wouldn't wear out the storage with writes; however, I would like them to be larger.
Interestingly, since rebooting with the 64/256, it now says "Maximum total size of all RAM disks cannot exceed available kernel memory: 692.58 MiB". So with RAM disks enabled I now suddenly have almost 700 MiB more kernel memory. Instead of reducing the available memory by the amount used, it seems to be INCREASING it by the amount used. What on earth is going on here?
-
And here is the memory situation:
SYSTEM MEMORY INFORMATION:
mem_wire: 736890880 ( 702MB) [ 8%] Wired: disabled for paging out
mem_active: + 253931520 ( 242MB) [ 3%] Active: recently referenced
mem_inactive:+ 63934464 ( 60MB) [ 0%] Inactive: recently not referenced
mem_cache: + 0 ( 0MB) [ 0%] Cached: almost avail. for allocation
mem_free: + 7180591104 ( 6847MB) [ 87%] Free: fully available for allocation
mem_gap_vm: + -180224 ( 0MB) [ 0%] Memory gap: UNKNOWN
mem_all: = 8235167744 ( 7853MB) [100%] Total real memory managed
mem_gap_sys: + 267194368 ( 254MB) Memory gap: Kernel?!
mem_phys: = 8502362112 ( 8108MB) Total real memory available
mem_gap_hw: + 87572480 ( 83MB) Memory gap: Segment Mappings?!
mem_hw: = 8589934592 ( 8192MB) Total real memory installed
SYSTEM MEMORY SUMMARY:
mem_used: 1345409024 ( 1283MB) [ 15%] Logically used memory
mem_avail: + 7244525568 ( 6908MB) [ 84%] Logically available memory
mem_total: = 8589934592 ( 8192MB) [100%] Logically total memory
-
I will answer myself. I had to make this change:
vm.kmem_size="4096M"
Has there been a change to the way RAM disks are allocated? I think it used to be more dynamic. Now it looks like it wants to reserve all the RAM. Not really an issue in my use case, but curious.
[2.4.5-RELEASE][root@stonewall.cedar-republic.com]/root: sh freebsd-memory.sh
SYSTEM MEMORY INFORMATION:
mem_wire: 3750756352 ( 3577MB) [ 45%] Wired: disabled for paging out
mem_active: + 273117184 ( 260MB) [ 3%] Active: recently referenced
mem_inactive:+ 63623168 ( 60MB) [ 0%] Inactive: recently not referenced
mem_cache: + 0 ( 0MB) [ 0%] Cached: almost avail. for allocation
mem_free: + 4121100288 ( 3930MB) [ 50%] Free: fully available for allocation
mem_gap_vm: + 26570752 ( 25MB) [ 0%] Memory gap: UNKNOWN
mem_all: = 8235167744 ( 7853MB) [100%] Total real memory managed
mem_gap_sys: + 267194368 ( 254MB) Memory gap: Kernel?!
mem_phys: = 8502362112 ( 8108MB) Total real memory available
mem_gap_hw: + 87572480 ( 83MB) Memory gap: Segment Mappings?!
mem_hw: = 8589934592 ( 8192MB) Total real memory installed
SYSTEM MEMORY SUMMARY:
mem_used: 4405211136 ( 4201MB) [ 51%] Logically used memory
mem_avail: + 4184723456 ( 3990MB) [ 48%] Logically available memory
mem_total: = 8589934592 ( 8192MB) [100%] Logically total memory
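For anyone else hitting this, here is roughly how I applied the change. I'm assuming /boot/loader.conf.local is the right place for a custom loader tunable on pfSense (the main /boot/loader.conf can be regenerated by the system), and vm.kmem_size only takes effect after a reboot:
# append the tunable so it survives reboots
echo 'vm.kmem_size="4096M"' >> /boot/loader.conf.local
# reboot, then confirm the new limit
sysctl vm.kmem_size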
-
It's this: https://redmine.pfsense.org/projects/pfsense/repository/revisions/82bf21fc37ef436f0d8439a84a97d61a7d5979b6
When we moved to FreeBSD 11-stable for 2.4.5, one of the early issues we hit (I hit it myself) was devices with RAM drives failing to boot after the upgrade.
We initially only hit it on 32-bit ARM devices, because they have far less kernel memory available, but it actually affected all devices.
It turns out that the memory disk creation code was updated to check whether there is enough space to create the drives and to fail if there is not. Previously it would allow creation of a drive of any size, whether or not there was actually enough memory to accommodate it, leading to crashes if the drive was ever filled beyond the available space.
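Conceptually, the new behaviour amounts to a pre-flight check along these lines. This is only a sketch of the idea, not the actual pfSense code from the commit above, and mapping "available kernel memory" to the vm.kmem_map_free sysctl is my assumption:
#!/bin/sh
# Sketch: refuse to create RAM disks larger than the free kernel memory.
tmp_mb=64    # requested /tmp RAM disk size in MB
var_mb=256   # requested /var RAM disk size in MB
requested=$(( (tmp_mb + var_mb) * 1024 * 1024 ))
available=$(sysctl -n vm.kmem_map_free)
if [ "$requested" -gt "$available" ]; then
    echo "Requested RAM disks exceed available kernel memory: $((available / 1048576)) MiB" >&2
    exit 1
fi
echo "OK: requested RAM disks fit in kernel memory"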
So, yes, RAM drives are limited in 2.4.5, but that's a good thing.
And there is almost no reason to have drives that large. 64/256 is large enough for almost all expected use. I usually use double the defaults, 80/120 MB.
Steve
-
Thank you for the explanation; that makes sense. In essence, the RAM disk allocation has moved from "thin" provisioning to "thick" provisioning. And yes, I know the disks are considerably larger than I need, especially since I send everything to a remote syslog server and the local logs are capped. I did it because I have far more RAM in the system than I really need and was still wondering what extra data and/or graphs I could capture. If I ever find a need to run something that needs the RAM, I will reduce these values as needed.