pfSense memory usage



  • We run a pfSense stack with two members, both on 2.4.4-RELEASE-p3, virtualized on ESX 6.something. The setup uses CARP for replication and failover. Both are running fine, but the backup/secondary node shows much higher memory usage than the primary member. We have no performance or availability issues at the moment, but I am wondering what causes this usage. If I restart the node, memory usage starts lower but slowly grows to about 60% and higher after roughly 20-25 days.

    dashboard
    stats
    system_activity

    Both nodes run the same release, were upgraded from 2.2.something, and have the same services/packages and hardware specs.

    2.4.4-RELEASE-p3 (amd64)
    built on Wed May 15 18:53:44 EDT 2019
    FreeBSD 11.2-RELEASE-p10

    List of installed packages:

    • cron
    • haproxy
    • nmap
    • open-vm-tools
    • openvpn-client-export
    • sudo

    Current states:
    State Table Total Rate
    current entries 797
    searches 260031939 163.6/s

    Aliases: about 30 (groups)
    Firewall rules: 5 zones, about 80 rules in total

    I found a lot of related info, but none with a solution:
    https://forum.netgate.com/topic/130622/is-high-memory-usage-normal
    https://forum.netgate.com/topic/50032/high-memory-usage/9
    https://forum.netgate.com/topic/61420/memory-leak
    https://forum.netgate.com/topic/4667/possible-memory-leak/5
    https://www.reddit.com/r/PFSENSE/comments/bg1ogg/wired_memory_slowly_creeping_up/
    https://redmine.pfsense.org/issues/8249
    https://forum.netgate.com/topic/47513/memory-usage-climbing/9
    https://redmine.pfsense.org/issues/2819
    https://forum.netgate.com/topic/130396/wire-memory-slowly-increasing/10

    Can someone help me find the cause? We use SNMP monitoring, which is what brought me to this question. The memory load does not drop when I restart the running services, the PHP-FPM service, or the webConfigurator.


  • Netgate Administrator

    Does it just continue to grow if you don't restart the node?

    And the primary does not show that?

    Try running top at the command line instead of using Diag > System Activity, then sort by size instead of CPU usage.
    Compare that output on both nodes.

    Steve
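
A one-shot approximation of that sorted view, for when an interactive top session is inconvenient over SSH (a minimal sketch; sorting by VSZ here, which is column 2 of the chosen output):

```shell
# List the ten largest processes by virtual size (VSZ, in KiB).
# pid/vsz/rss/comm are standard ps output keywords on both FreeBSD and Linux,
# so the same command can be run on both nodes and the outputs compared.
ps -axo pid,vsz,rss,comm | sort -rnk2 | head -n 10
```

Sorting on column 3 instead would rank by resident size (RES), which is usually the more meaningful number when hunting a leak.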



  • Hi, thanks for your reply. The current status on the dashboard is "62% of 2000 MiB", so it is still slowly growing; the primary node's is not.
    As requested, top -aSH, sorted by size:

    ===Backup appliance

    last pid:  2448;  load averages:  0.32,  0.28,  0.26                                                                                                                          up 19+10:23:42  15:39:42
    204 processes: 3 running, 154 sleeping, 47 waiting
    CPU:  0.2% user,  0.0% nice,  0.0% system,  0.0% interrupt, 99.8% idle
    Mem: 27M Active, 433M Inact, 1170M Wired, 198M Buf, 317M Free
    Swap: 4096M Total, 4096M Free
    
      PID USERNAME      PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
    34541 root           20    0 99820K 39164K accept  1   0:21   0.00% php-fpm: pool nginx (php-fpm){php-fpm}
    34864 root           52    0 99820K 39128K accept  0   0:21   0.00% php-fpm: pool nginx (php-fpm){php-fpm}
    47659 root           20    0 97772K 38560K accept  0   0:21   0.00% php-fpm: pool nginx (php-fpm){php-fpm}
    74679 root           52    0 97772K 38288K accept  0   0:00   0.00% php-fpm: pool nginx (php-fpm){php-fpm}
    30427 root           20    0 88384K 28712K kqread  1   0:03   0.00% php-fpm: master process (/usr/local/lib/php-fpm.conf) (php-fpm)
    37340 root           20    0 67592K 61624K select  1   4:05   0.00% /usr/sbin/bsnmpd -c /var/etc/snmpd.conf -p /var/run/snmpd.pid
    52219 root           20    0 48060K 43496K select  0  14:14   0.04% /usr/local/bin/vmtoolsd -c /usr/local/share/vmware-tools/tools.conf -p /usr/local/lib/open-vm-tools/plugins/vmsvc
    67851 unbound        20    0 38172K 18752K kqread  1   0:00   0.00% /usr/local/sbin/unbound -c /var/unbound/unbound.conf{unbound}
    67851 unbound        20    0 38172K 18752K kqread  1   0:00   0.00% /usr/local/sbin/unbound -c /var/unbound/unbound.conf{unbound}
    43029 root           20    0 37904K 18280K select  0   0:01   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    43029 root           20    0 37904K 18280K uwait   0   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    43029 root           20    0 37904K 18280K uwait   1   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    43029 root           20    0 37904K 18280K uwait   1   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    43029 root           20    0 37904K 18280K uwait   0   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    43029 root           20    0 37904K 18280K uwait   0   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    43029 root           20    0 37904K 18280K uwait   0   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    43029 root           20    0 37904K 18280K uwait   1   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    43029 root           20    0 37904K 18280K uwait   1   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    43029 root           20    0 37904K 18280K uwait   1   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    43029 root           20    0 37904K 18280K uwait   0   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    43029 root           20    0 37904K 18280K uwait   0   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    43029 root           20    0 37904K 18280K uwait   1   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    43029 root           31    0 37904K 18280K sigwai  0   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    43029 root           31    0 37904K 18280K uwait   1   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    43029 root           31    0 37904K 18280K select  1   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    43029 root           31    0 37904K 18280K uwait   0   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    94154 root           20    0 23596K 10304K kqread  1   0:10   0.00% nginx: worker process (nginx)
    93939 root           20    0 23596K  9620K kqread  0   0:06   0.00% nginx: worker process (nginx)
    93731 root           52    0 21548K  7752K pause   1   0:00   0.00% nginx: master process /usr/local/sbin/nginx -c /var/etc/nginx-webConfigurator.conf (nginx)
    27209 root           20    0 12908K  9488K select  0   0:00   0.02% sshd: admin@pts/2 (sshd)
    14001 root           20    0 12616K  8824K select  1   0:00   0.00% /usr/sbin/sshd
    17482 root           20    0 12400K 12504K select  0   0:07   0.01% /usr/local/sbin/ntpd -g -c /var/etc/ntpd.conf -p /var/run/ntpd.pid{ntpd}
    86276 root           20    0 11912K  7772K piperd  0   0:00   0.00% /usr/local/libexec/sshg-parser
    26928 root           20    0 10216K  6516K select  1   0:10   0.00% /usr/local/sbin/openvpn --config /var/et
    

    ===

    ===Master appliance

    last pid: 69934;  load averages:  0.10,  0.14,  0.15                                                                                                                          up 19+10:13:34  15:40:24
    210 processes: 3 running, 160 sleeping, 47 waiting
    CPU:  0.2% user,  0.0% nice,  0.2% system,  0.2% interrupt, 99.4% idle
    Mem: 72M Active, 420M Inact, 373M Wired, 198M Buf, 1082M Free
    Swap: 4096M Total, 4096M Free
    
      PID USERNAME      PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
    13320 root           52    0 99820K 40504K accept  0   0:09   0.00% php-fpm: pool nginx (php-fpm){php-fpm}
    72977 root           52    0 99820K 40488K accept  0   0:10   0.00% php-fpm: pool nginx (php-fpm){php-fpm}
     5805 root           52    0 99820K 40368K accept  0   0:04   0.00% php-fpm: pool nginx (php-fpm){php-fpm}
    72353 root           20    0 97772K 39788K accept  0   0:07   0.00% php-fpm: pool nginx (php-fpm){php-fpm}
    63653 root           24    0 97772K 39604K accept  0   0:06   0.00% php-fpm: pool nginx (php-fpm){php-fpm}
      709 root           20    0 88384K 26292K kqread  0   0:41   0.00% php-fpm: master process (/usr/local/lib/php-fpm.conf) (php-fpm)
    44910 root           20    0 67592K 62272K select  1  33:51   0.01% /usr/sbin/bsnmpd -c /var/etc/snmpd.conf -p /var/run/snmpd.pid
    63513 root           20    0 48060K 43420K select  0  14:29   0.04% /usr/local/bin/vmtoolsd -c /usr/local/share/vmware-tools/tools.conf -p /usr/local/lib/open-vm-tools/plugins/vmsvc
     4081 unbound        20    0 38172K 18780K kqread  1   0:01   0.00% /usr/local/sbin/unbound -c /var/unbound/unbound.conf{unbound}
     4081 unbound        20    0 38172K 18780K kqread  1   0:01   0.00% /usr/local/sbin/unbound -c /var/unbound/unbound.conf{unbound}
    42282 root           20    0 37904K 21536K uwait   1   0:07   0.01% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    42282 root           20    0 37904K 21536K select  1   0:10   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    42282 root           20    0 37904K 21536K uwait   0   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    42282 root           20    0 37904K 21536K uwait   1   0:23   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    42282 root           20    0 37904K 21536K uwait   1   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    42282 root           20    0 37904K 21536K uwait   1   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    42282 root           20    0 37904K 21536K uwait   1   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    42282 root           20    0 37904K 21536K uwait   0   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    42282 root           20    0 37904K 21536K uwait   1   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    42282 root           20    0 37904K 21536K uwait   0   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    42282 root           20    0 37904K 21536K uwait   1   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    42282 root           20    0 37904K 21536K uwait   1   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    42282 root           20    0 37904K 21536K uwait   1   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    42282 root           20    0 37904K 21536K uwait   0   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    42282 root           20    0 37904K 21536K uwait   0   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    42282 root           47    0 37904K 21536K sigwai  1   0:02   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    42282 root           20    0 37904K 21536K select  1   0:01   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
    45159 www            20    0 30436K 21912K kqread  1  78:33   0.61% /usr/local/sbin/haproxy -f /var/etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D -sf 15930
     3197 root           20    0 23596K  9456K kqread  0   0:11   0.00% nginx: worker process (nginx)
     2977 root           20    0 23596K  9428K kqread  0   0:04   0.00% nginx: worker process (nginx)
     2816 root           52    0 21548K  7692K pause   1   0:00   0.00% nginx: master process /usr/local/sbin/nginx -c /var/etc/nginx-webConfigurator.conf (nginx)
     9925 root           20    0 12908K  9456K select  1   0:00   0.01% sshd: admin@pts/0 (sshd)
    14363 root           20    0 12616K  8804K select  0   0:14   0.00% /usr/sbin/sshd
     7964 root           20    0 12400K 12504K select  0   1:47   0.00% /usr/local/sbin/ntpd -g -c /var/etc/ntpd.conf -p /var/run/ntpd.pid{ntpd}
    98280 root           20    0 11912K  7772K piperd  0   0:04   0.00% /usr/local/libexec/sshg-parser
    

    ===


  • Netgate Administrator

    Hmm, well it's all wired usage by the looks of it. Nothing looks dramatically incorrect there really.
    I would want to see if that continues to climb if you don't reboot it.

    Steve
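
Since the growth is in wired memory, it belongs to the kernel and will never be attributed to a userland process in top. On FreeBSD the kernel's own allocations can be itemized with vmstat, and comparing the two nodes should show which consumer diverges; a minimal sketch (the line count is arbitrary, and the fallback message is an assumption for systems without the flag):

```shell
# Break down kernel (wired) memory. On FreeBSD, `vmstat -m` shows usage per
# kernel malloc type; `vmstat -z` (not run here) shows UMA zone usage.
# The fallback keeps the sketch runnable where the flag is unsupported.
out=$(vmstat -m 2>/dev/null | head -n 15)
if [ -n "$out" ]; then
    printf '%s\n' "$out"
else
    echo "vmstat -m not available on this system"
fi
```

Capturing this periodically (e.g. from cron) and diffing snapshots a few days apart is a common way to catch a slowly leaking kernel consumer.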



  • Memory usage is still growing, today's update:

    ===
    Uptime 28 Days 05 Hours 55 Minutes 19 Seconds
    Memory usage 81% of 2000 MiB

    203 processes: 3 running, 153 sleeping, 47 waiting
    
    Mem: 12M Active, 307M Inact, 1558M Wired, 198M Buf, 71M Free
    Swap: 4096M Total, 4096M Free
    

    ===

    Still no issues found besides our monitoring check that complains about memory usage :-)



  • Another week has passed; the host is now using 98% of its 2000 MiB of RAM. It is still alive and nothing useful is being logged, but it has started to use swap as well, currently 6%. The console (SSH) responds a bit slowly but still works.

    I am a bit desperate about the next step: wait for release 2.5, wait for another 2.4.4 patch, reinstall this host, or reboot it once a month...

    The primary node is using only 22% of its RAM, with the same uptime.



  • The system ran out of swap last night and the kernel killed processes:

    pid 52219 (vmtoolsd), uid 0, was killed: out of swap space
    pid 37340 (bsnmpd), uid 0, was killed: out of swap space
    pid 99704 (netstat), uid 0, was killed: out of swap space
    pid 16857 (php-fpm), uid 0, was killed: out of swap space
    pid 93238 (php-fpm), uid 0, was killed: out of swap space
    pid 55684 (php-fpm), uid 0, was killed: out of swap space
    pid 67420 (php-fpm), uid 0, was killed: out of swap space
    pid 14021 (php-fpm), uid 0, was killed: out of swap space
    pid 4367 (php-fpm), uid 0, was killed: out of swap space
    pid 57915 (php-fpm), uid 0, was killed: out of swap space
    pid 82469 (php-cgi), uid 0, was killed: out of swap space

    The WebGUI died and I was unable to start it again; the console/SSH was available but very slow to respond.
    I had to reboot the VM to get it back online. Memory usage was at 14% after the reboot.



  • @marcvw said in pfSense memory usage:

    /usr/local/libexec/ipsec/charon

    I compared your (visible) processes with mine.
    True, something is eating memory; it's up to you to find out which process is doing so.

    I'm not using /usr/local/libexec/ipsec/charon: can you 'kill' IPsec for some time?

    I do not use haproxy either.

    Btw: I'm using Munin, because memory (among many other things) should be tracked over time.



  • Thanks, I have stopped the IPsec service (which killed those processes). We need it because we use IPsec VPNs, but to track down this issue I can leave it stopped for a while. Even though it has been stopped for some time now, wired memory usage still grows; I don't see a significant increase or decrease in the growth rate. I will re-check the values on Monday.



  • I rechecked the memory usage growth: same as before, so I enabled the IPsec service again. I will continue to monitor and troubleshoot this appliance...


  • Netgate Administrator

    So disabling IPSec had no significant effect on the memory usage growth?



  • @stephenw10

    No, it does still grow. Waiting for pfSense 2.5 / FreeBSD 12 ;-)


  • Netgate Administrator

    Try a snapshot if you can. Now is the time to be finding bugs that may still exist there.

    Steve


  • LAYER 8

    Hi Everyone,
    I am also facing a similar issue at at least three different sites. All the locations have a similar configuration:

    2.4.4-RELEASE (amd64)
    built on Thu Sep 20 09:03:12 EDT 2018
    FreeBSD 11.2-RELEASE-p3

    Version 2.4.4_3 is available.

    Packages Installed :

    Openvpn_client
    squid
    squidguard
    cron
    sudo
    mailreport

    Memory usage is growing on each of these devices. A sample output of the top -aSH command is below:

    last pid: 34046;  load averages: 23.91,  9.50,  5.23  up 216+08:05:06    18:25:32
    926 processes: 3 running, 903 sleeping, 20 waiting
    
    Mem: 149M Active, 498M Inact, 860M Laundry, 1762M Wired, 200K Buf, 91M Free
    ARC: 1223M Total, 220M MFU, 893M MRU, 33K Anon, 9466K Header, 100M Other
         971M Compressed, 1632M Uncompressed, 1.68:1 Ratio
    Swap: 4096M Total, 737M Used, 3359M Free, 17% Inuse
    
    
      PID USERNAME      PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
       11 root          155 ki31     0K    32K RUN     0 5017.4 100.00% [idle{idle: cpu0}]
       11 root          155 ki31     0K    32K CPU1    1 5070.2  97.66% [idle{idle: cpu1}]
    86614 root           21    0 92624K 26100K piperd  0   0:05   0.88% php-fpm: pool nginx (php-fpm){php-fpm}
    39125 squid          20    0  2442M   986M kqread  1  38.8H   0.10% (squid-1) -f /usr/local/etc/squid/squid.conf (squid)
       12 root          -92    -     0K   320K WAIT    1 556:47   0.10% [intr{irq261: re1}]
       12 root          -92    -     0K   320K WAIT    0 430:51   0.10% [intr{irq259: re0}]
        0 root          -92    -     0K  4304K -       1  50.9H   0.00% [kernel{dummynet}]
       12 root          -60    -     0K   320K WAIT    1 495:42   0.00% [intr{swi4: clock (0)}]
        0 root          -92    -     0K  4304K -       0 213:13   0.00% [kernel{em0 taskq}]
        0 root          -12    -     0K  4304K -       0 136:11   0.00% [kernel{zio_write_issue}]
       18 root          -16    -     0K    16K pftm    0 126:00   0.00% [pf purge]
       12 root          -72    -     0K   320K WAIT    1  36:47   0.00% [intr{swi1: netisr 0}]
        6 root           -8    -     0K   160K tx->tx  1  30:40   0.00% [zfskern{txg_thread_enter}]
    26882 root           20    0 12908K 13012K select  0  30:22   0.00% /usr/local/sbin/ntpd -g -c /var/etc/ntpd.conf -p /var/run/ntpd.pid{ntpd}
       19 root          -16    -     0K    16K -       1  26:51   0.00% [rand_harvestq]
       20 root          -16    -     0K    48K psleep  0  23:05   0.00% [pagedaemon{dom0}]
    73517 squid          20    0  9948K  2740K select  0  18:55   0.00% (pinger) (pinger)
    46320 squid          20    0  9948K  2740K select  1  18:39   0.00% (pinger) (pinger)
    

    Any pointers?
    Regards,
    Ashima


  • Netgate Administrator

    You need to sort that output by 'RES' to see what's actually using the memory, but we can already see Squid is using close to 1 GB, so that probably needs tuning.

    Steve
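
On the Squid side, the main memory knob is cache_mem, which caps the RAM Squid uses for in-transit and hot objects; the process RSS will still exceed it because of the cache index and per-connection buffers. An illustrative squid.conf fragment (the values are placeholders to tune against the box's RAM, not a recommendation):

```
# Illustrative values only; actual sizing depends on available RAM and the
# cache_dir size, since the in-memory index grows with the disk cache.
cache_mem 256 MB
maximum_object_size_in_memory 512 KB
```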


  • LAYER 8

    Thanks @stephenw10 for pointing that out.

    The memory usage now reached 98%, so I just rebooted the machine; it's down to 21%. Btw, I have 4 GB RAM.

    Can you have a look at the top output from the second location? (I don't know how to sort by RES.) Please note that on this machine squid is using 367M. The squid configuration is exactly the same in all these locations; the only difference between the two is uptime (the former was ~200 days, the latter is ~35 days).

    
    last pid: 87708;  load averages:  0.73,  1.01,  0.87  up 35+12:14:17    21:28:52
    556 processes: 3 running, 526 sleeping, 27 waiting
    
    Mem: 119M Active, 819M Inact, 478M Laundry, 2056M Wired, 204K Buf, 342M Free
    ARC: 1547M Total, 643M MFU, 817M MRU, 3920K Anon, 12M Header, 71M Other
         1323M Compressed, 4466M Uncompressed, 3.38:1 Ratio
    Swap: 4096M Total, 15M Used, 4081M Free
    
    
      PID USERNAME   PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
       11 root       155 ki31     0K    32K CPU0    0 818.6H  97.46% [idle{idle: cpu0}]
       11 root       155 ki31     0K    32K RUN     1 824.4H  92.58% [idle{idle: cpu1}]
    85280 root        20    0   294M   258M nanslp  1  21:16   0.88% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i em1 -i em1.101 -i em1.201 -i em1.103 --dns
      345 root        21    0 92880K 26292K piperd  0   0:22   0.78% php-fpm: pool nginx (php-fpm){php-fpm}
    85280 root        20    0   294M   258M nanslp  1  11:23   0.29% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i em1 -i em1.101 -i em1.201 -i em1.103 --dns
    85280 root        20    0   294M   258M nanslp  0  11:07   0.10% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i em1 -i em1.101 -i em1.201 -i em1.103 --dns
        0 root       -92    -     0K  4320K -       0 480:45   0.00% [kernel{dummynet}]
       12 root       -92    -     0K   432K WAIT    0 116:50   0.00% [intr{irq259: re0}]
        0 root       -12    -     0K  4320K -       0 116:36   0.00% [kernel{zio_write_issue}]
    56232 squid       20    0   434M   367M kqread  1 112:07   0.00% (squid-1) -f /usr/local/etc/squid/squid.conf (squid)
        0 root       -92    -     0K  4320K -       1  89:54   0.00% [kernel{em1 taskq}]
       12 root       -60    -     0K   432K WAIT    1  80:04   0.00% [intr{swi4: clock (0)}]
       18 root       -16    -     0K    16K pftm    0  19:53   0.00% [pf purge]
    86720 unbound     20    0 81016K 67500K kqread  1  19:23   0.00% /usr/local/sbin/unbound -c /var/unbound/unbound.conf{unbound}
    86720 unbound     20    0 81016K 67500K kqread  1  14:52   0.00% /usr/local/sbin/unbound -c /var/unbound/unbound.conf{unbound}
    20896 root        20    0   540M   476M nanslp  1  14:04   0.00% /usr/local/bin/php-cgi -q /usr/local/bin/notify_monitor.php
       12 root       -92    -     0K   432K WAIT    0  11:01   0.00% [intr{irq269: re1}]
        6 root        -8    -     0K   160K tx->tx  0   9:30   0.00% [zfskern{txg_thread_enter}]
    

    Thank you,
    Ashima


  • Netgate Administrator

    @ashima said in pfSense memory usage:

    20896 root 20 0 540M 476M nanslp 1 14:04 0.00% /usr/local/bin/php-cgi -q /usr/local/bin/notify_monitor.php

    That looks odd.

    But you also have ntopng running there consuming a lot.

    At the command line, run top -aS, then press 'o' and enter 'res'.

    Steve


  • LAYER 8

    @stephenw10 I am using the GUI command line; I use OpenVPN to log in to these remote machines. Can I use top -aS and press 'o' from the GUI?


  • Netgate Administrator

    You can run top -aS -o res from the GUI to get a single snapshot.


  • LAYER 8

    Here's the output :

    last pid: 99113;  load averages:  0.51,  0.36,  0.33  up 35+13:21:14    22:35:49
    172 processes: 3 running, 168 sleeping, 1 waiting
    
    Mem: 84M Active, 620M Inact, 478M Laundry, 2045M Wired, 204K Buf, 587M Free
    ARC: 1544M Total, 644M MFU, 812M MRU, 4788K Anon, 12M Header, 71M Other
         1322M Compressed, 4466M Uncompressed, 3.38:1 Ratio
    Swap: 4096M Total, 15M Used, 4081M Free
    
    
      PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
    20896 root          1  20    0   540M   477M nanslp  1  14:05   0.00% /usr/local/bin/php-cgi -q /usr/local/bin/notify_monitor.php
    56232 squid         1  20    0   434M   367M kqread  0 112:08   0.00% (squid-1) -f /usr/local/etc/squid/squid.conf (squid)
    86720 unbound       2  20    0 81016K 67500K kqread  1  34:17   0.00% /usr/local/sbin/unbound -c /var/unbound/unbound.conf
      345 root          1  22    0 92880K 26292K piperd  0   0:23   0.88% php-fpm: pool nginx (php-fpm)
    98064 root          1  92   20 90312K 24652K CPU0    0   0:00   0.00% /usr/local/bin/php-cgi -q /usr/local/bin/captiveportal_gather_stats.php commonuser loggedin
      346 root          1  52    0 88396K 23400K accept  0   0:22   0.00% php-fpm: pool nginx (php-fpm)
    75482 root          1  52    0 92752K 23064K accept  1   0:16   0.00% php-fpm: pool nginx (php-fpm)
    94115 squid         1  20    0 21880K 15724K sbwait  1   0:09   0.00% (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.conf (squidGuard)
    94285 squid         1  20    0 21880K 15644K sbwait  0   0:01   0.00% (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.conf (squidGuard)
    94550 squid         1  20    0 21880K 15344K sbwait  0   0:00   0.00% (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.conf (squidGuard)
    94791 squid         1  20    0 21880K 15132K sbwait  1   0:00   0.00% (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.conf (squidGuard)
     6350 squid         1  20    0 21880K 14912K sbwait  1   0:00   0.00% (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.conf (squidGuard)
     6595 squid         1  20    0 19832K 14780K sbwait  0   0:00   0.00% (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.conf (squidGuard)
      344 root          1  20    0 88264K 14768K kqread  0   1:29   0.00% php-fpm: master process (/usr/local/lib/php-fpm.conf) (php-fpm)
     6729 squid         1  20    0 17784K 13092K sbwait  0   0:00   0.00% (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.conf (squidGuard)
    17356 root          1  20    0 12908K 13012K select  1   5:08   0.00% /usr/local/sbin/ntpd -g -c /var/etc/ntpd.conf -p /var/run/ntpd.pid
    94399 squid         1  28    0 17784K 12696K sbwait  1   0:00   0.00% (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.conf (squidGuard)
     7049 squid         1  20    0 17784K 12696K sbwait  1   0:00   0.00% (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.conf (squidGuard)
    

    I manually killed the ntopng process.


  • Netgate Administrator

    You definitely have an issue with notify_monitor.php there; something it is calling is using a load of RAM.

    Try running ps -auxwd to see what is actually using that memory.

    Steve
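
To pull the heavy consumers out of ps output without eyeballing the whole process tree, the RSS column can be sorted directly; a minimal sketch (the column positions assume the standard aux layout, where field 6 is RSS in KiB):

```shell
# Print the ten largest resident sets from a BSD-style ps listing.
# Field 6 is RSS in KiB and field 11 is the start of the command name on
# both FreeBSD and Linux procps; stderr is silenced for procps' flag warning.
ps -auxw 2>/dev/null | awk 'NR > 1 { print $6, $11 }' | sort -rn | head -n 10
```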


  • LAYER 8

    @stephenw10 Here's the output. The notify_monitor process is using 12% of memory:

    USER      PID  %CPU %MEM    VSZ    RSS TT  STAT STARTED        TIME COMMAND
    root        0   0.0  0.1      0   4320  -  DLs  22Aug19   747:47.00 [kernel]
    root       11 200.0  0.0      0     32  -  RNL  22Aug19 98756:31.42 - [idle]
    root        1   0.0  0.0   5020    260  -  ILs  22Aug19     0:04.54 - /sbin/init --
    root      344   0.0  0.4  88264  14768  -  Ss   22Aug19     1:29.02 |-- php-fpm: master process (/usr/local/lib/php-fpm.conf) (php-
    root      345   0.6  0.7  92880  26292  -  S    22Aug19     0:22.95 | |-- php-fpm: pool nginx (php-fpm)
    root    44669   0.0  0.1   6808   2888  -  R    22:53       0:00.00 | | `-- ps -auxwd
    root      346   0.0  0.6  88396  23400  -  I    22Aug19     0:21.87 | |-- php-fpm: pool nginx (php-fpm)
    root    75482   0.0  0.6  92752  23064  -  I    22Aug19     0:16.08 | `-- php-fpm: pool nginx (php-fpm)
    root      359   0.0  0.1   9240   3788  -  INs  22Aug19     0:00.03 |-- /usr/local/sbin/check_reload_status
    root      361   0.0  0.0   9240      0  -  IWN  -           0:00.00 | `-- check_reload_status: Monitoring daemon of check_reload_st
    root      407   0.0  0.0   9184    572  -  Is   22Aug19     0:01.34 |-- /sbin/devd -q -f /etc/pfSense-devd.conf
    root     8700   0.0  0.0   6448   1876  -  Is   22Aug19     0:00.00 |-- dhclient: re0 [priv] (dhclient)
    root    15918   0.0  0.0  21544      0  -  IWs  -           0:00.00 |-- nginx: master process /usr/local/sbin/nginx -c /var/etc/ngi
    root    15999   0.0  0.2  23592   6824  -  S    22Aug19     0:00.89 | |-- nginx: worker process (nginx)
    root    16260   0.0  0.1  23592   5876  -  S    22Aug19     0:00.90 | `-- nginx: worker process (nginx)
    _dhcp   16079   0.0  0.0   6448   1748  -  ICs  22Aug19     0:00.05 |-- dhclient: re0 (dhclient)
    root    16703   0.0  0.0   6368   1224  -  Is   22Aug19     0:18.43 |-- /usr/sbin/cron -s
    root    17356   0.0  0.3  12908  13012  -  Ss   22Aug19     5:08.49 |-- /usr/local/sbin/ntpd -g -c /var/etc/ntpd.conf -p /var/run/n
    root    20569   0.0  0.1   6392   2496  -  Ss   22Aug19     0:36.76 |-- /usr/sbin/syslogd -s -c -c -l /var/dhcpd/var/run/log -P /va
    root    41017   0.0  0.0   6968      0  -  IWs  -           0:00.00 | `-- /bin/sh /usr/local/sbin/sshguard
    root    41644   0.0  0.0   6196   2004  -  I    22Aug19     0:00.00 |   |-- cat
    root    41668   0.0  0.1  11612   4184  -  IC   22Aug19     0:00.01 |   |-- /usr/local/libexec/sshg-parser
    root    41690   0.0  0.1   6524   2440  -  IC   22Aug19     0:02.50 |   |-- /usr/local/libexec/sshg-blocker -s 3600
    root    41913   0.0  0.0   6968      0  -  IW   -           0:00.00 |   `-- /bin/sh /usr/local/sbin/sshguard
    root    42212   0.0  0.1   6968   2388  -  I    22Aug19     0:00.00 |     `-- /bin/sh /usr/local/libexec/sshg-fw-pf
    root    21575   0.0  0.0   6192      0  -  IWs  -           0:00.00 |-- /usr/local/bin/minicron 240 /var/run/ping_hosts.pid /usr/lo
    root    21785   0.0  0.0   6192    368  -  I    22Aug19     0:01.04 | `-- minicron: helper /usr/local/bin/ping_hosts.sh  (minicron)
    root    21791   0.0  0.0   6192      0  -  IWs  -           0:00.00 |-- /usr/local/bin/minicron 3600 /var/run/expire_accounts.pid /
    root    22379   0.0  0.0   6192    368  -  I    22Aug19     0:00.07 | `-- minicron: helper /usr/local/sbin/fcgicli -f /etc/rc.expir
    root    22476   0.0  0.0   6192      0  -  IWs  -           0:00.00 |-- /usr/local/bin/minicron 86400 /var/run/update_alias_url_dat
    root    22782   0.0  0.0   6192    368  -  I    22Aug19     0:00.00 | `-- minicron: helper /usr/local/sbin/fcgicli -f /etc/rc.updat
    dhcpd   24961   0.0  0.1  12576   5720  -  Ss   22Aug19     4:28.58 |-- /usr/local/sbin/dhcpd -user dhcpd -group _dhcp -chroot /var
    root    35643   0.0  0.1  10212   5752  -  Ss   22Aug19     0:19.39 |-- /usr/local/sbin/openvpn --config /var/etc/openvpn/server1.c
    root    50538   0.0  0.1  13196   3048  -  Is   22Aug19     0:13.18 |-- /usr/local/sbin/filterdns -p /var/run/filterdns-commonuser-
    root    51621   0.0  0.0  21544      0  -  IWs  -           0:00.00 |-- nginx: master process /usr/local/sbin/nginx -c /var/etc/ngi
    root    51942   0.0  0.0  21544     16  -  I    22Aug19     0:00.03 | |-- nginx: worker process (nginx)
    root    52111   0.0  0.0  21544     16  -  I    22Aug19     0:00.03 | |-- nginx: worker process (nginx)
    root    52250   0.0  0.1  23592   2192  -  I    22Aug19     0:00.29 | |-- nginx: worker process (nginx)
    root    52426   0.0  0.1  23592   2080  -  I    22Aug19     0:00.03 | |-- nginx: worker process (nginx)
    root    52565   0.0  0.1  23592   2144  -  I    22Aug19     0:00.81 | |-- nginx: worker process (nginx)
    root    52733   0.0  0.1  23592   2208  -  I    22Aug19     0:02.50 | `-- nginx: worker process (nginx)
    root    52922   0.0  0.0   6192      0  -  IWs  -           0:00.00 |-- /usr/local/bin/minicron 60 /var/run/cp_prunedb_commonuser.p
    root    53296   0.0  0.0   6192    368  -  S    22Aug19     0:04.41 | `-- minicron: helper /etc/rc.prunecaptiveportal commonuser (m
    root    55816   0.0  0.0  26228      0  -  IWs  -           0:00.00 |-- /usr/local/sbin/squid -f /usr/local/etc/squid/squid.conf
    squid   56232   0.0  9.4 444000 376296  -  S    22Aug19   112:08.39 | `-- (squid-1) -f /usr/local/etc/squid/squid.conf (squid)
    squid    1565   0.0  0.1   9948   3840  -  S    26Aug19     2:40.44 |   |-- (pinger) (pinger)
    squid    6350   0.0  0.4  21880  14912  -  I    07:05       0:00.08 |   |-- (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.co
    squid    6595   0.0  0.4  19832  14780  -  I    07:05       0:00.07 |   |-- (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.co
    squid    6729   0.0  0.3  17784  13092  -  I    07:05       0:00.07 |   |-- (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.co
    squid    7049   0.0  0.3  17784  12696  -  I    07:05       0:00.06 |   |-- (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.co
    squid    8820   0.0  0.1   9948   3840  -  S    27Aug19     2:31.20 |   |-- (pinger) (pinger)
    squid    9350   0.0  0.1   9948   3840  -  S    15Sep19     0:57.98 |   |-- (pinger) (pinger)
    squid   10668   0.0  0.1   9948   3844  -  S    16Sep19     1:00.51 |   |-- (pinger) (pinger)
    squid   11866   0.0  0.1   9948   3840  -  S     5Sep19     1:48.65 |   |-- (pinger) (pinger)
    squid   14811   0.0  0.1   9948   3840  -  S     2Sep19     2:04.27 |   |-- (pinger) (pinger)
    squid   22151   0.0  0.1   9948   4512  -  S    Sun00       0:31.95 |   |-- (pinger) (pinger)
    squid   22708   0.0  0.1   9948   4512  -  S    Tue00       0:21.07 |   |-- (pinger) (pinger)
    squid   22880   0.0  0.1   9948   3840  -  S     5Sep19     1:44.84 |   |-- (pinger) (pinger)
    squid   26394   0.0  0.1   9948   4516  -  S    Wed06       0:10.28 |   |-- (pinger) (pinger)
    squid   26795   0.0  0.1   9948   3840  -  S    29Aug19     2:26.29 |   |-- (pinger) (pinger)
    squid   28465   0.0  0.1   9948   3840  -  S    10Sep19     1:27.03 |   |-- (pinger) (pinger)
    squid   32205   0.0  0.1   9948   3844  -  S    17Sep19     0:49.34 |   |-- (pinger) (pinger)
    squid   33081   0.0  0.1   9948   3840  -  S     8Sep19     1:35.38 |   |-- (pinger) (pinger)
    squid   34205   0.0  0.1   9948   3840  -  S    30Aug19     2:21.59 |   |-- (pinger) (pinger)
    squid   36124   0.0  0.1   9948   4512  -  S    Mon00       0:27.42 |   |-- (pinger) (pinger)
    squid   36524   0.0  0.1   9948   3840  -  S     4Sep19     1:56.02 |   |-- (pinger) (pinger)
    squid   37538   0.0  0.1   9948   3840  -  S     3Sep19     2:00.34 |   |-- (pinger) (pinger)
    squid   38958   0.0  0.1   9948   3840  -  S    13Sep19     1:06.98 |   |-- (pinger) (pinger)
    squid   39948   0.0  0.1   9948   3848  -  S    19Sep19     0:37.90 |   |-- (pinger) (pinger)
    squid   43575   0.0  0.1   9948   3840  -  S     2Sep19     2:00.49 |   |-- (pinger) (pinger)
    squid   44734   0.0  0.1   9948   3840  -  S    28Aug19     2:32.25 |   |-- (pinger) (pinger)
    squid   45480   0.0  0.1   9948   3844  -  S    15Sep19     0:57.26 |   |-- (pinger) (pinger)
    squid   46420   0.0  0.1   9948   3840  -  S     6Sep19     1:37.58 |   |-- (pinger) (pinger)
    squid   46455   0.0  0.1   9948   3840  -  S    15Sep19     1:00.20 |   |-- (pinger) (pinger)
    squid   51962   0.0  0.1   9948   3840  -  S     9Sep19     1:28.64 |   |-- (pinger) (pinger)
    squid   56649   0.0  0.1   9948   3840  -  S     1Sep19     2:10.41 |   |-- (pinger) (pinger)
    squid   57637   0.0  0.1   9948   3840  -  S     9Sep19     1:24.50 |   |-- (pinger) (pinger)
    squid   58202   0.0  0.1   9948   3844  -  S    18Sep19     0:50.83 |   |-- (pinger) (pinger)
    squid   59306   0.0  0.1   9948   3840  -  S     8Sep19     1:22.95 |   |-- (pinger) (pinger)
    squid   59702   0.0  0.1   9948   3848  -  S    19Sep19     0:42.96 |   |-- (pinger) (pinger)
    squid   62271   0.0  0.1   9948   3844  -  S    17Sep19     0:54.13 |   |-- (pinger) (pinger)
    squid   62549   0.0  0.1   9824   3588  -  S    22Aug19     0:12.35 |   |-- (unlinkd) (unlinkd)
    squid   62633   0.0  0.1   9948   3840  -  S    22Aug19     2:50.71 |   |-- (pinger) (pinger)
    squid   63420   0.0  0.1   9948   3840  -  S    24Aug19     2:42.39 |   |-- (pinger) (pinger)
    squid   65608   0.0  0.1   9948   3840  -  S     7Sep19     1:42.67 |   |-- (pinger) (pinger)
    squid   67062   0.0  0.2  11352   7704  -  S    Wed06       0:20.57 |   |-- (ssl_crtd) -s /var/squid/lib/ssl_db -M 4MB -b 2048 (ssl
    squid   67313   0.0  0.2  11352   7524  -  I    Wed06       0:01.05 |   |-- (ssl_crtd) -s /var/squid/lib/ssl_db -M 4MB -b 2048 (ssl
    squid   67368   0.0  0.2  11352   7464  -  I    Wed06       0:00.19 |   |-- (ssl_crtd) -s /var/squid/lib/ssl_db -M 4MB -b 2048 (ssl
    squid   67563   0.0  0.2  11352   7460  -  I    Wed06       0:00.09 |   |-- (ssl_crtd) -s /var/squid/lib/ssl_db -M 4MB -b 2048 (ssl
    squid   67612   0.0  0.2  11352   7504  -  I    Wed06       0:00.05 |   |-- (ssl_crtd) -s /var/squid/lib/ssl_db -M 4MB -b 2048 (ssl
    squid   68043   0.0  0.1   9948   3840  -  S    24Aug19     2:46.88 |   |-- (pinger) (pinger)
    squid   69071   0.0  0.1   9948   4516  -  S    Wed06       0:13.93 |   |-- (pinger) (pinger)
    squid   69080   0.0  0.1   9948   3840  -  S    15Sep19     0:56.17 |   |-- (pinger) (pinger)
    squid   69693   0.0  0.1   9948   3840  -  S    13Sep19     1:10.54 |   |-- (pinger) (pinger)
    squid   70220   0.0  0.1   9948   3848  -  S    19Sep19     0:41.58 |   |-- (pinger) (pinger)
    squid   72973   0.0  0.1   9948   3840  -  S    27Aug19     2:25.25 |   |-- (pinger) (pinger)
    squid   75906   0.0  0.1   9948   3844  -  S    15Sep19     0:58.36 |   |-- (pinger) (pinger)
    squid   76657   0.0  0.1   9948   3844  -  S    17Sep19     0:50.78 |   |-- (pinger) (pinger)
    squid   78114   0.0  0.1   9948   3840  -  S    25Aug19     2:46.93 |   |-- (pinger) (pinger)
    squid   78733   0.0  0.1   9948   3840  -  S     6Sep19     1:42.55 |   |-- (pinger) (pinger)
    squid   81980   0.0  0.1   9948   3840  -  S     2Sep19     1:59.23 |   |-- (pinger) (pinger)
    squid   82678   0.0  0.1   9948   3840  -  S    23Aug19     2:55.61 |   |-- (pinger) (pinger)
    squid   83495   0.0  0.1   9948   4512  -  S    Sat00       0:37.03 |   |-- (pinger) (pinger)
    squid   86022   0.0  0.1   9948   3840  -  S    13Sep19     1:11.31 |   |-- (pinger) (pinger)
    squid   87194   0.0  0.1   9948   3840  -  S    22Aug19     2:53.21 |   |-- (pinger) (pinger)
    squid   88171   0.0  0.1   9948   3840  -  S    11Sep19     1:23.44 |   |-- (pinger) (pinger)
    squid   88502   0.0  0.1   9948   3840  -  S     6Sep19     1:43.69 |   |-- (pinger) (pinger)
    squid   88929   0.0  0.1   9948   3840  -  S    14Sep19     1:08.11 |   |-- (pinger) (pinger)
    squid   90107   0.0  0.1   9948   3840  -  S    12Sep19     1:17.78 |   |-- (pinger) (pinger)
    squid   92240   0.0  0.1   9948   3840  -  S     9Sep19     1:27.19 |   |-- (pinger) (pinger)
    squid   92322   0.0  0.1   9948   4512  -  S    Fri00       0:42.32 |   |-- (pinger) (pinger)
    squid   92795   0.0  0.1   9948   3840  -  S     8Sep19     1:24.35 |   |-- (pinger) (pinger)
    squid   93941   0.0  0.1   9948   4516  -  S    00:00       0:10.58 |   |-- (pinger) (pinger)
    squid   94115   0.0  0.4  21880  15724  -  S    00:00       0:08.87 |   |-- (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.co
    squid   94285   0.0  0.4  21880  15644  -  I    00:00       0:00.66 |   |-- (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.co
    squid   94399   0.0  0.3  17784  12696  -  I    15:01       0:00.06 |   |-- (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.co
    squid   94550   0.0  0.4  21880  15344  -  I    00:00       0:00.19 |   |-- (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.co
    squid   94644   0.0  0.3  17784  11520  -  I    15:01       0:00.06 |   |-- (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.co
    squid   94704   0.0  0.1   9948   3840  -  S     5Sep19     1:43.31 |   |-- (pinger) (pinger)
    squid   94791   0.0  0.4  21880  15132  -  I    00:00       0:00.08 |   |-- (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.co
    squid   94902   0.0  0.3  17784  11520  -  I    15:01       0:00.06 |   |-- (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.co
    squid   95092   0.0  0.3  17784  11520  -  I    15:01       0:00.04 |   |-- (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.co
    squid   96049   0.0  0.1   9948   4512  -  S    Wed00       0:12.00 |   |-- (pinger) (pinger)
    squid   96400   0.0  0.1   9948   3840  -  S    24Aug19     2:45.10 |   |-- (pinger) (pinger)
    squid   98054   0.0  0.1   9948   3840  -  S    31Aug19     2:18.20 |   |-- (pinger) (pinger)
    squid   99900   0.0  0.1   9948   3840  -  S    27Aug19     2:27.31 |   `-- (pinger) (pinger)
    root    56473   0.0  0.1   6600   2068  -  Ss   22Aug19     1:18.85 |-- /usr/local/sbin/filterlog -i pflog0 -p /var/run/filterlog.p
    root    65353   0.0  0.1   6968   2388  -  I    Wed06       0:01.20 |-- /bin/sh /usr/local/pkg/sqpmon.sh
    root     9081   0.0  0.0   4144   1952  -  IC   22:53       0:00.00 | `-- sleep 55
    root    70260   0.0  0.1   6900   2120  -  Is   22Aug19     8:40.21 |-- /usr/local/bin/dpinger -S -r 0 -i WAN_DHCP -B 172.16.8.2 -p
    root    70673   0.0  0.1  10996   2224  -  Is   22Aug19     8:31.92 |-- /usr/local/bin/dpinger -S -r 0 -i CiscoGW -B 192.168.0.250
    root    72592   0.0  0.2  10200   6580  -  Ss   Wed06       0:22.97 |-- /usr/local/sbin/openvpn --config /var/etc/openvpn/client2.c
    root    84867   0.0  0.1  10220   4992  -  S    Wed06       1:45.39 |-- redis-server: /usr/local/bin/redis-server 127.0.0.1:6379 (r
    unbound 86720   0.0  1.7  81016  67500  -  Ss   22Aug19    34:17.05 |-- /usr/local/sbin/unbound -c /var/unbound/unbound.conf
    root    20896   0.0 12.1 553196 488316 v0- S    22Aug19    14:04.94 |-- /usr/local/bin/php-cgi -q /usr/local/bin/notify_monitor.php
    root    39160   0.0  0.0   6720      0 v0  IWs  -           0:00.00 |-- login [pam] (login)
    root    41323   0.0  0.0   6968      0 v0  IW   -           0:00.00 | `-- -sh (sh)
    root    42500   0.0  0.1   6968   2388 v0  I+   22Aug19     0:00.00 |   `-- /bin/sh /etc/rc.initial
    root    82478   0.0  0.0   6968   1024 v0- IN   22Aug19     7:20.67 |-- /bin/sh /var/db/rrd/updaterrd.sh
    root    43316   0.0  0.0   4144   1952  -  INC  22:53       0:00.00 | `-- sleep 60
    root    39450   0.0  0.1   6312   2076 v1  Is+  22Aug19     0:00.00 |-- /usr/libexec/getty Pc ttyv1
    root    39627   0.0  0.1   6312   2076 v2  Is+  22Aug19     0:00.00 |-- /usr/libexec/getty Pc ttyv2
    root    39944   0.0  0.1   6312   2076 v3  Is+  22Aug19     0:00.00 |-- /usr/libexec/getty Pc ttyv3
    root    39987   0.0  0.1   6312   2076 v4  Is+  22Aug19     0:00.00 |-- /usr/libexec/getty Pc ttyv4
    root    40291   0.0  0.1   6312   2076 v5  Is+  22Aug19     0:00.00 |-- /usr/libexec/getty Pc ttyv5
    root    40597   0.0  0.1   6312   2076 v6  Is+  22Aug19     0:00.00 |-- /usr/libexec/getty Pc ttyv6
    root    40692   0.0  0.1   6312   2076 v7  Is+  22Aug19     0:00.00 `-- /usr/libexec/getty Pc ttyv7
    root        2   0.0  0.0      0     16  -  DL   22Aug19     0:00.00 - [crypto]
    root        3   0.0  0.0      0     16  -  DL   22Aug19     0:00.00 - [crypto returns 0]
    root        4   0.0  0.0      0     16  -  DL   22Aug19     0:00.00 - [crypto returns 1]
    root        5   0.0  0.0      0     32  -  DL   22Aug19     0:00.00 - [cam]
    root        6   0.0  0.0      0    160  -  DL   22Aug19    11:53.83 - [zfskern]
    root        7   0.0  0.0      0     16  -  DL   22Aug19     0:00.52 - [soaiod1]
    root        8   0.0  0.0      0     16  -  DL   22Aug19     0:00.51 - [soaiod2]
    root        9   0.0  0.0      0     16  -  DL   22Aug19     0:00.51 - [soaiod3]
    root       10   0.0  0.0      0     16  -  DL   22Aug19     0:00.00 - [audit]
    root       12   0.0  0.0      0    432  -  WL   22Aug19   224:29.95 - [intr]
    root       13   0.0  0.0      0     32  -  DL   22Aug19     0:00.00 - [ng_queue]
    root       14   0.0  0.0      0     48  -  DL   22Aug19     0:00.01 - [geom]
    root       15   0.0  0.0      0    400  -  DL   22Aug19     1:13.59 - [usb]
    root       16   0.0  0.0      0     16  -  DL   22Aug19     0:00.50 - [soaiod4]
    root       17   0.0  0.0      0     16  -  DL   22Aug19     0:00.00 - [sctp_iterator]
    root       18   0.0  0.0      0     16  -  DL   22Aug19    19:55.03 - [pf purge]
    root       19   0.0  0.0      0     16  -  DL   22Aug19     4:13.50 - [rand_harvestq]
    root       20   0.0  0.0      0     48  -  DL   22Aug19     2:13.47 - [pagedaemon]
    root       21   0.0  0.0      0     16  -  DL   22Aug19     0:00.10 - [vmdaemon]
    root       22   0.0  0.0      0     16  -  DNL  22Aug19     0:00.05 - [pagezero]
    root       23   0.0  0.0      0     16  -  DL   22Aug19     0:17.20 - [bufdaemon]
    root       24   0.0  0.0      0     16  -  DL   22Aug19     0:16.23 - [bufspacedaemon]
    root       25   0.0  0.0      0     16  -  DL   22Aug19     1:15.56 - [syncer]
    root       26   0.0  0.0      0     16  -  DL   22Aug19     0:17.89 - [vnlru]
    root       68   0.0  0.0      0     16  -  DL   22Aug19     0:03.02 - [md0]
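    Two things stand out in the tree above: the single php-cgi `notify_monitor.php` process holding ~12% of memory (~477 MB RSS), and the dozens of squid `(pinger)` helpers, with one surviving from nearly every day since 22 Aug. A quick way to surface both from the console (a sketch; assumes the stock FreeBSD `ps`, with RSS reported in KiB as the first column of `-axo rss,comm`):

    ```shell
    # Top 10 processes by resident set size (first column, in KiB)
    ps -axo rss,comm | sort -rn | head -n 10

    # Count the accumulated pinger helpers
    # (the [p] keeps grep from matching its own process entry)
    ps ax | grep -c '[p]inger'
    ```

    If the pinger count keeps climbing day over day, that points at squid spawning helpers without reaping the old ones, independent of the wired-memory growth.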
    

  • LAYER 8

    I forgot to mention: I am also using Service Watchdog and FreeRADIUS. The former keeps a watch on the latter and notifies me by mail. Could these be the culprit?


  • Netgate Administrator

    Hmm, nothing shown there. I was hoping to see the processes it spawned or was spawned from.

    That should not be using anything like that. Do you see any related errors in the system log? Is it actually sending notifications?

    This seems unrelated to the OP's issue though. It should probably go in a new thread.

    Steve


  • LAYER 8

    Thanks @stephenw10 for the response. I am sorry @marcvw for hijacking this thread.

    I guess "sending notifications" is the culprit. I am using a dummy Gmail account for all notifications from different sites. When I checked the log in the meantime, it showed this message:

    Error: Failed to send data [SMTP: Invalid response code received from server (code: 550, response: 5.4.5 Daily user sending quota exceeded. f188sm5666858pfa.170 - gsmtp)]

    Maybe this is causing the issue. If that is not the case, please let me know. I'll start a new thread and we can continue our discussion there.

    Regards,
    Ashima



  • @ashima said in pfSense memory usage:

    Error: Failed to send data [SMTP: Invalid response code received from server (code: 550, response: 5.4.5 Daily user sending quota exceeded. f188sm5666858pfa.170 - gsmtp)]

    Gmail said that??
    I've never seen that before. We're talking about an awful lot of mails here. You should definitely investigate that.

    Btw: a watchdog for FreeRADIUS? It has never shut down on me.
    If you think you need the Service Watchdog package, what you actually need is to sort out the problem that makes the process crash.
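    To get a sense of how many notification mails pfSense actually attempted, you could count the failures in the system log (a sketch: `clog` is the reader for pfSense's circular logs, and the search string is taken from the error message quoted above — adjust both if your log location or wording differs):

    ```shell
    # Count SMTP notification failures recorded in the system log
    clog /var/log/system.log | grep -c 'Failed to send'
    ```

    Hitting Gmail's daily sending quota suggests hundreds of notification attempts per day, which is worth tracking down regardless of the memory question.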


  • LAYER 8

    @Gertjan @stephenw10
    The discussion is getting interesting. I am starting a new thread:

    https://forum.netgate.com/topic/146882/pfsense-memory-usage-part-2

    Regards,
    Ashima

