Netgate Discussion Forum

pfSense memory usage

General pfSense Questions
27 Posts 4 Posters 7.8k Views
  • M
    marcvw
    posted Jul 1, 2019, 1:06 PM, last edited by marcvw Jul 18, 2019, 10:40 AM

    We run a pfSense cluster with two members, both on 2.4.4-RELEASE-p3, virtualized on ESXi 6.something. This setup uses CARP for replication and failover. Both are running fine, but the backup/secondary node shows much higher memory usage than the primary. We have no performance or availability issues at the moment, but I am wondering what causes this usage. If I restart the node, memory usage drops, then slowly grows back to about 60% and higher after roughly 20-25 days.

    [screenshots: dashboard, stats, system_activity]

    Both nodes are running the same release, have been upgraded from 2.2.something and have the same services/packages and hardware specs.

    2.4.4-RELEASE-p3 (amd64)
    built on Wed May 15 18:53:44 EDT 2019
    FreeBSD 11.2-RELEASE-p10

    List of installed packages:

    • cron
    • haproxy
    • nmap
    • open-vm-tools
    • openvpn-client-export
    • sudo

    Current states:
    State Table        Total        Rate
    current entries    797
    searches           260031939    163.6/s

    Aliases: about 30 (groups)
    Firewall rules: 5 zones, about 80 rules in total

    Found a lot of related info but none with a solution...
    https://forum.netgate.com/topic/130622/is-high-memory-usage-normal
    https://forum.netgate.com/topic/50032/high-memory-usage/9
    https://forum.netgate.com/topic/61420/memory-leak
    https://forum.netgate.com/topic/4667/possible-memory-leak/5
    https://www.reddit.com/r/PFSENSE/comments/bg1ogg/wired_memory_slowly_creeping_up/
    https://redmine.pfsense.org/issues/8249
    https://forum.netgate.com/topic/47513/memory-usage-climbing/9
    https://redmine.pfsense.org/issues/2819
    https://forum.netgate.com/topic/130396/wire-memory-slowly-increasing/10

    Can someone help me find the cause? We use SNMP monitoring, which is what brought me to this question. The memory load does not go down when I restart the running services, the PHP-FPM service, or the webConfigurator.

    • S
      stephenw10 Netgate Administrator
      last edited by Jul 2, 2019, 1:25 PM

      Does it just continue to grow if you don't restart the node?

      And the primary does not show that?

      Try running top at the command line instead of using Diag > System Activity, then sort by size instead of CPU usage.
      Compare that output on both nodes.
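
      A one-shot, non-interactive version of that check, for reference (the FreeBSD top syntax is shown as a comment; the ps pipeline is only a rough, portable approximation):

      ```shell
      # On pfSense (FreeBSD), a one-shot listing sorted by resident memory:
      #   top -aSH -o res
      # Rough portable approximation with ps (column 6 is RSS in KiB):
      ps aux | sort -rnk6 | head -n 15
      ```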

      Steve

      • M
        marcvw
        last edited by Jul 2, 2019, 1:50 PM

        Hi, thanks for your reply. The current status on the dashboard is "62% of 2000 MiB", so it is slowly growing. The primary node's isn't.
        As requested, top -aSH, sorted by size:

        ===Backup appliance

        last pid:  2448;  load averages:  0.32,  0.28,  0.26                                                                                                                          up 19+10:23:42  15:39:42
        204 processes: 3 running, 154 sleeping, 47 waiting
        CPU:  0.2% user,  0.0% nice,  0.0% system,  0.0% interrupt, 99.8% idle
        Mem: 27M Active, 433M Inact, 1170M Wired, 198M Buf, 317M Free
        Swap: 4096M Total, 4096M Free
        
          PID USERNAME      PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
        34541 root           20    0 99820K 39164K accept  1   0:21   0.00% php-fpm: pool nginx (php-fpm){php-fpm}
        34864 root           52    0 99820K 39128K accept  0   0:21   0.00% php-fpm: pool nginx (php-fpm){php-fpm}
        47659 root           20    0 97772K 38560K accept  0   0:21   0.00% php-fpm: pool nginx (php-fpm){php-fpm}
        74679 root           52    0 97772K 38288K accept  0   0:00   0.00% php-fpm: pool nginx (php-fpm){php-fpm}
        30427 root           20    0 88384K 28712K kqread  1   0:03   0.00% php-fpm: master process (/usr/local/lib/php-fpm.conf) (php-fpm)
        37340 root           20    0 67592K 61624K select  1   4:05   0.00% /usr/sbin/bsnmpd -c /var/etc/snmpd.conf -p /var/run/snmpd.pid
        52219 root           20    0 48060K 43496K select  0  14:14   0.04% /usr/local/bin/vmtoolsd -c /usr/local/share/vmware-tools/tools.conf -p /usr/local/lib/open-vm-tools/plugins/vmsvc
        67851 unbound        20    0 38172K 18752K kqread  1   0:00   0.00% /usr/local/sbin/unbound -c /var/unbound/unbound.conf{unbound}
        67851 unbound        20    0 38172K 18752K kqread  1   0:00   0.00% /usr/local/sbin/unbound -c /var/unbound/unbound.conf{unbound}
        43029 root           20    0 37904K 18280K select  0   0:01   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        43029 root           20    0 37904K 18280K uwait   0   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        43029 root           20    0 37904K 18280K uwait   1   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        43029 root           20    0 37904K 18280K uwait   1   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        43029 root           20    0 37904K 18280K uwait   0   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        43029 root           20    0 37904K 18280K uwait   0   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        43029 root           20    0 37904K 18280K uwait   0   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        43029 root           20    0 37904K 18280K uwait   1   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        43029 root           20    0 37904K 18280K uwait   1   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        43029 root           20    0 37904K 18280K uwait   1   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        43029 root           20    0 37904K 18280K uwait   0   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        43029 root           20    0 37904K 18280K uwait   0   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        43029 root           20    0 37904K 18280K uwait   1   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        43029 root           31    0 37904K 18280K sigwai  0   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        43029 root           31    0 37904K 18280K uwait   1   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        43029 root           31    0 37904K 18280K select  1   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        43029 root           31    0 37904K 18280K uwait   0   0:00   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        94154 root           20    0 23596K 10304K kqread  1   0:10   0.00% nginx: worker process (nginx)
        93939 root           20    0 23596K  9620K kqread  0   0:06   0.00% nginx: worker process (nginx)
        93731 root           52    0 21548K  7752K pause   1   0:00   0.00% nginx: master process /usr/local/sbin/nginx -c /var/etc/nginx-webConfigurator.conf (nginx)
        27209 root           20    0 12908K  9488K select  0   0:00   0.02% sshd: admin@pts/2 (sshd)
        14001 root           20    0 12616K  8824K select  1   0:00   0.00% /usr/sbin/sshd
        17482 root           20    0 12400K 12504K select  0   0:07   0.01% /usr/local/sbin/ntpd -g -c /var/etc/ntpd.conf -p /var/run/ntpd.pid{ntpd}
        86276 root           20    0 11912K  7772K piperd  0   0:00   0.00% /usr/local/libexec/sshg-parser
        26928 root           20    0 10216K  6516K select  1   0:10   0.00% /usr/local/sbin/openvpn --config /var/et
        

        ===

        ===Master appliance

        last pid: 69934;  load averages:  0.10,  0.14,  0.15                                                                                                                          up 19+10:13:34  15:40:24
        210 processes: 3 running, 160 sleeping, 47 waiting
        CPU:  0.2% user,  0.0% nice,  0.2% system,  0.2% interrupt, 99.4% idle
        Mem: 72M Active, 420M Inact, 373M Wired, 198M Buf, 1082M Free
        Swap: 4096M Total, 4096M Free
        
          PID USERNAME      PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
        13320 root           52    0 99820K 40504K accept  0   0:09   0.00% php-fpm: pool nginx (php-fpm){php-fpm}
        72977 root           52    0 99820K 40488K accept  0   0:10   0.00% php-fpm: pool nginx (php-fpm){php-fpm}
         5805 root           52    0 99820K 40368K accept  0   0:04   0.00% php-fpm: pool nginx (php-fpm){php-fpm}
        72353 root           20    0 97772K 39788K accept  0   0:07   0.00% php-fpm: pool nginx (php-fpm){php-fpm}
        63653 root           24    0 97772K 39604K accept  0   0:06   0.00% php-fpm: pool nginx (php-fpm){php-fpm}
          709 root           20    0 88384K 26292K kqread  0   0:41   0.00% php-fpm: master process (/usr/local/lib/php-fpm.conf) (php-fpm)
        44910 root           20    0 67592K 62272K select  1  33:51   0.01% /usr/sbin/bsnmpd -c /var/etc/snmpd.conf -p /var/run/snmpd.pid
        63513 root           20    0 48060K 43420K select  0  14:29   0.04% /usr/local/bin/vmtoolsd -c /usr/local/share/vmware-tools/tools.conf -p /usr/local/lib/open-vm-tools/plugins/vmsvc
         4081 unbound        20    0 38172K 18780K kqread  1   0:01   0.00% /usr/local/sbin/unbound -c /var/unbound/unbound.conf{unbound}
         4081 unbound        20    0 38172K 18780K kqread  1   0:01   0.00% /usr/local/sbin/unbound -c /var/unbound/unbound.conf{unbound}
        42282 root           20    0 37904K 21536K uwait   1   0:07   0.01% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        42282 root           20    0 37904K 21536K select  1   0:10   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        42282 root           20    0 37904K 21536K uwait   0   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        42282 root           20    0 37904K 21536K uwait   1   0:23   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        42282 root           20    0 37904K 21536K uwait   1   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        42282 root           20    0 37904K 21536K uwait   1   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        42282 root           20    0 37904K 21536K uwait   1   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        42282 root           20    0 37904K 21536K uwait   0   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        42282 root           20    0 37904K 21536K uwait   1   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        42282 root           20    0 37904K 21536K uwait   0   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        42282 root           20    0 37904K 21536K uwait   1   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        42282 root           20    0 37904K 21536K uwait   1   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        42282 root           20    0 37904K 21536K uwait   1   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        42282 root           20    0 37904K 21536K uwait   0   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        42282 root           20    0 37904K 21536K uwait   0   0:07   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        42282 root           47    0 37904K 21536K sigwai  1   0:02   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        42282 root           20    0 37904K 21536K select  1   0:01   0.00% /usr/local/libexec/ipsec/charon --use-syslog{charon}
        45159 www            20    0 30436K 21912K kqread  1  78:33   0.61% /usr/local/sbin/haproxy -f /var/etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D -sf 15930
         3197 root           20    0 23596K  9456K kqread  0   0:11   0.00% nginx: worker process (nginx)
         2977 root           20    0 23596K  9428K kqread  0   0:04   0.00% nginx: worker process (nginx)
         2816 root           52    0 21548K  7692K pause   1   0:00   0.00% nginx: master process /usr/local/sbin/nginx -c /var/etc/nginx-webConfigurator.conf (nginx)
         9925 root           20    0 12908K  9456K select  1   0:00   0.01% sshd: admin@pts/0 (sshd)
        14363 root           20    0 12616K  8804K select  0   0:14   0.00% /usr/sbin/sshd
         7964 root           20    0 12400K 12504K select  0   1:47   0.00% /usr/local/sbin/ntpd -g -c /var/etc/ntpd.conf -p /var/run/ntpd.pid{ntpd}
        98280 root           20    0 11912K  7772K piperd  0   0:04   0.00% /usr/local/libexec/sshg-parser
        

        ===

        • S
          stephenw10 Netgate Administrator
          last edited by Jul 2, 2019, 5:34 PM

          Hmm, well it's all wired usage by the looks of it. Nothing looks dramatically wrong there, really.
          I would want to see whether that continues to climb if you don't reboot it.

          Steve

          • M
            marcvw
            last edited by Jul 11, 2019, 9:13 AM

            Memory usage is still growing, today's update:

            ===
            Uptime 28 Days 05 Hours 55 Minutes 19 Seconds
            Memory usage 81% of 2000 MiB

            203 processes: 3 running, 153 sleeping, 47 waiting
            
            Mem: 12M Active, 307M Inact, 1558M Wired, 198M Buf, 71M Free
            Swap: 4096M Total, 4096M Free
            

            ===

            Still no issues found besides our monitoring check that complains about memory usage :-)
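
            For what it's worth, the dashboard percentage can be reproduced from top's "Mem:" line; a small awk sketch using the figures above (Buf overlaps the other counters, so the total here is Active+Inact+Wired+Free):

            ```shell
            # Derive the wired share from a FreeBSD top "Mem:" line (figures from above).
            echo "Mem: 12M Active, 307M Inact, 1558M Wired, 198M Buf, 71M Free" |
            awk -F'[ ,]+' '{
                for (i = 2; i < NF; i += 2) { gsub("M", "", $i); v[$(i+1)] = $i }
                t = v["Active"] + v["Inact"] + v["Wired"] + v["Free"]
                printf "wired: %.0f%% of %dM\n", 100 * v["Wired"] / t, t
            }'
            # prints: wired: 80% of 1948M
            ```

            That matches the ~81% the dashboard reports and shows the growth is essentially all wired memory.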

            • M
              marcvw
              posted Jul 18, 2019, 10:37 AM, last edited by marcvw Jul 18, 2019, 10:39 AM

              Another week has passed; the host is using 98% of 2000 MiB RAM now. It is still alive and nothing useful is being logged, but it has started to use swap as well, currently 6%. The console (SSH) responds a bit slowly but still works.

              I am a bit unsure about the next step: wait for release 2.5 or another 2.4.4 patch, reinstall this host, reboot it once a month...

              The primary node is using only 22% of its RAM, with the same uptime.

              • M
                marcvw
                last edited by Jul 19, 2019, 8:58 AM

                System ran out of swap last night and killed all processes:

                pid 52219 (vmtoolsd), uid 0, was killed: out of swap space
                pid 37340 (bsnmpd), uid 0, was killed: out of swap space
                pid 99704 (netstat), uid 0, was killed: out of swap space
                pid 16857 (php-fpm), uid 0, was killed: out of swap space
                pid 93238 (php-fpm), uid 0, was killed: out of swap space
                pid 55684 (php-fpm), uid 0, was killed: out of swap space
                pid 67420 (php-fpm), uid 0, was killed: out of swap space
                pid 14021 (php-fpm), uid 0, was killed: out of swap space
                pid 4367 (php-fpm), uid 0, was killed: out of swap space
                pid 57915 (php-fpm), uid 0, was killed: out of swap space
                pid 82469 (php-cgi), uid 0, was killed: out of swap space

                The WebGUI died and I was unable to start it again; the console/SSH was available but responded very slowly.
                I had to reboot the VM to get it back online. Memory usage was at 14% after the reboot.
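
                To catch the culprit before the next OOM, something like the following could be run from cron (a hypothetical sketch; the path, interval, and the FreeBSD vmstat line are assumptions, not something tested on this box):

                ```shell
                # Hypothetical memory logger; run e.g. every 30 minutes from cron:
                #   */30 * * * * /root/memlog.sh
                LOG=/tmp/memlog.txt
                date >> "$LOG"
                ps aux | sort -rnk6 | head -n 10 >> "$LOG"   # top resident processes
                # On FreeBSD, wired memory lives in kernel zones; to log those too:
                #   vmstat -m | head -n 20 >> "$LOG"
                ```

                Comparing snapshots over a few days should show whether the growth belongs to a process RSS or only to kernel-side wired memory.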

                • G
                  Gertjan
                  posted Jul 19, 2019, 3:06 PM, last edited by Gertjan Jul 19, 2019, 3:07 PM

                  @marcvw said in pfSense memory usage:

                  /usr/local/libexec/ipsec/charon

                  I compared your (visible) processes with mine.
                  True, something is eating memory. Up to you to find out what process is doing so.

                  I'm not using /usr/local/libexec/ipsec/charon here. Can you 'kill' IPsec for some time?

                  I do not use haproxy.

                  Btw: I'm using Munin, because memory is something that should be tracked over time - among many other things.

                  No "help me" PM's please. Use the forum, the community will thank you.
                  Edit: and where are the logs?

                  • M
                    marcvw
                    last edited by Jul 26, 2019, 3:07 PM

                    Thanks, I have stopped the IPsec service (which killed the processes). We need it because we use IPsec VPNs, but to track this issue I can stop it for a while. Despite it having been stopped for some time now, wired memory usage still grows, and I don't see a significant increase or decrease in the growth rate. I will re-check the values on Monday.

                    • M
                      marcvw
                      last edited by Aug 5, 2019, 1:51 PM

                      Rechecked the memory usage growth: same as before, so I have re-enabled the IPsec service. I will continue to monitor and troubleshoot this appliance...

                      • S
                        stephenw10 Netgate Administrator
                        last edited by Aug 5, 2019, 2:45 PM

                        So disabling IPsec had no significant effect on the memory usage growth?

                        • M
                          marcvw @stephenw10
                          last edited by Sep 24, 2019, 9:18 AM

                          @stephenw10

                          No, it does still grow. Waiting for pfSense 2.5 / FreeBSD 12 ;-)

                          • S
                            stephenw10 Netgate Administrator
                            last edited by Sep 24, 2019, 12:15 PM

                            Try a snapshot if you can. Now is the time to be finding bugs that may still exist there.

                            Steve

                            • A
                              ashima LAYER 8
                              last edited by Sep 26, 2019, 1:07 PM

                              Hi Everyone,
                              I guess I am also facing a similar issue at at least three different sites. All the locations have a similar configuration:

                              2.4.4-RELEASE (amd64)
                              built on Thu Sep 20 09:03:12 EDT 2018
                              FreeBSD 11.2-RELEASE-p3

                              Version 2.4.4_3 is available.

                              Packages Installed :

                              Openvpn_client
                              squid
                              squidguard
                              cron
                              sudo
                              mailreport

                              The memory usage is growing on each of these devices. A sample output of the top -aSH command is below:

                              last pid: 34046;  load averages: 23.91,  9.50,  5.23  up 216+08:05:06    18:25:32
                              926 processes: 3 running, 903 sleeping, 20 waiting
                              
                              Mem: 149M Active, 498M Inact, 860M Laundry, 1762M Wired, 200K Buf, 91M Free
                              ARC: 1223M Total, 220M MFU, 893M MRU, 33K Anon, 9466K Header, 100M Other
                                   971M Compressed, 1632M Uncompressed, 1.68:1 Ratio
                              Swap: 4096M Total, 737M Used, 3359M Free, 17% Inuse
                              
                              
                                PID USERNAME      PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
                                 11 root          155 ki31     0K    32K RUN     0 5017.4 100.00% [idle{idle: cpu0}]
                                 11 root          155 ki31     0K    32K CPU1    1 5070.2  97.66% [idle{idle: cpu1}]
                              86614 root           21    0 92624K 26100K piperd  0   0:05   0.88% php-fpm: pool nginx (php-fpm){php-fpm}
                              39125 squid          20    0  2442M   986M kqread  1  38.8H   0.10% (squid-1) -f /usr/local/etc/squid/squid.conf (squid)
                                 12 root          -92    -     0K   320K WAIT    1 556:47   0.10% [intr{irq261: re1}]
                                 12 root          -92    -     0K   320K WAIT    0 430:51   0.10% [intr{irq259: re0}]
                                  0 root          -92    -     0K  4304K -       1  50.9H   0.00% [kernel{dummynet}]
                                 12 root          -60    -     0K   320K WAIT    1 495:42   0.00% [intr{swi4: clock (0)}]
                                  0 root          -92    -     0K  4304K -       0 213:13   0.00% [kernel{em0 taskq}]
                                  0 root          -12    -     0K  4304K -       0 136:11   0.00% [kernel{zio_write_issue}]
                                 18 root          -16    -     0K    16K pftm    0 126:00   0.00% [pf purge]
                                 12 root          -72    -     0K   320K WAIT    1  36:47   0.00% [intr{swi1: netisr 0}]
                                  6 root           -8    -     0K   160K tx->tx  1  30:40   0.00% [zfskern{txg_thread_enter}]
                              26882 root           20    0 12908K 13012K select  0  30:22   0.00% /usr/local/sbin/ntpd -g -c /var/etc/ntpd.conf -p /var/run/ntpd.pid{ntpd}
                                 19 root          -16    -     0K    16K -       1  26:51   0.00% [rand_harvestq]
                                 20 root          -16    -     0K    48K psleep  0  23:05   0.00% [pagedaemon{dom0}]
                              73517 squid          20    0  9948K  2740K select  0  18:55   0.00% (pinger) (pinger)
                              46320 squid          20    0  9948K  2740K select  1  18:39   0.00% (pinger) (pinger)
                              

                              Any pointers?
                              Regards,
                              Ashima

                              • S
                                stephenw10 Netgate Administrator
                                last edited by Sep 26, 2019, 3:07 PM

                                You need to sort that output by 'RES' to see what's actually using it, but we can see Squid is using close to 1 GB, so that probably needs tuning.

                                Steve
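
                                Squid's footprint is driven mainly by cache_mem plus in-memory index overhead per cached object; a minimal squid.conf sketch of the knobs usually involved (values purely illustrative, not a recommendation for these boxes):

                                ```
                                # Illustrative squid.conf fragment - size to the RAM actually available.
                                cache_mem 128 MB                      # RAM reserved for hot objects
                                maximum_object_size_in_memory 512 KB  # keep large objects on disk only
                                memory_pools off                      # let freed memory return to the OS
                                ```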

                                • A
                                  ashima LAYER 8 @stephenw10
                                  last edited by Sep 26, 2019, 4:14 PM

                                  Thanks @stephenw10 for pointing that.

                                   The memory usage reached 98%, so I just rebooted the machine; it's down to 21%. Btw, I have 4 GB RAM.

                                   Can you have a look at the top output from the 2nd location? (I don't know how to sort by RES.) Please note that on this machine squid is using 367M. The squid configuration is exactly the same at all these locations; the only difference between the two is uptime (the former was ~200 days, the latter is ~35 days).

                                  
                                   last pid: 87708;  load averages:  0.73,  1.01,  0.87  up 35+12:14:17    21:28:52
                                  556 processes: 3 running, 526 sleeping, 27 waiting
                                  
                                  Mem: 119M Active, 819M Inact, 478M Laundry, 2056M Wired, 204K Buf, 342M Free
                                  ARC: 1547M Total, 643M MFU, 817M MRU, 3920K Anon, 12M Header, 71M Other
                                       1323M Compressed, 4466M Uncompressed, 3.38:1 Ratio
                                  Swap: 4096M Total, 15M Used, 4081M Free
                                  
                                  
                                    PID USERNAME   PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
                                     11 root       155 ki31     0K    32K CPU0    0 818.6H  97.46% [idle{idle: cpu0}]
                                     11 root       155 ki31     0K    32K RUN     1 824.4H  92.58% [idle{idle: cpu1}]
                                  85280 root        20    0   294M   258M nanslp  1  21:16   0.88% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i em1 -i em1.101 -i em1.201 -i em1.103 --dns
                                    345 root        21    0 92880K 26292K piperd  0   0:22   0.78% php-fpm: pool nginx (php-fpm){php-fpm}
                                  85280 root        20    0   294M   258M nanslp  1  11:23   0.29% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i em1 -i em1.101 -i em1.201 -i em1.103 --dns
                                  85280 root        20    0   294M   258M nanslp  0  11:07   0.10% /usr/local/bin/ntopng -d /var/db/ntopng -G /var/run/ntopng.pid -s -e -w 0 -W 3000 -i em1 -i em1.101 -i em1.201 -i em1.103 --dns
                                      0 root       -92    -     0K  4320K -       0 480:45   0.00% [kernel{dummynet}]
                                     12 root       -92    -     0K   432K WAIT    0 116:50   0.00% [intr{irq259: re0}]
                                      0 root       -12    -     0K  4320K -       0 116:36   0.00% [kernel{zio_write_issue}]
                                  56232 squid       20    0   434M   367M kqread  1 112:07   0.00% (squid-1) -f /usr/local/etc/squid/squid.conf (squid)
                                      0 root       -92    -     0K  4320K -       1  89:54   0.00% [kernel{em1 taskq}]
                                     12 root       -60    -     0K   432K WAIT    1  80:04   0.00% [intr{swi4: clock (0)}]
                                     18 root       -16    -     0K    16K pftm    0  19:53   0.00% [pf purge]
                                  86720 unbound     20    0 81016K 67500K kqread  1  19:23   0.00% /usr/local/sbin/unbound -c /var/unbound/unbound.conf{unbound}
                                  86720 unbound     20    0 81016K 67500K kqread  1  14:52   0.00% /usr/local/sbin/unbound -c /var/unbound/unbound.conf{unbound}
                                  20896 root        20    0   540M   476M nanslp  1  14:04   0.00% /usr/local/bin/php-cgi -q /usr/local/bin/notify_monitor.php
                                     12 root       -92    -     0K   432K WAIT    0  11:01   0.00% [intr{irq269: re1}]
                                      6 root        -8    -     0K   160K tx->tx  0   9:30   0.00% [zfskern{txg_thread_enter}]
                                  

                                  Thank you,
                                  Ashima

                                  • S
                                    stephenw10 Netgate Administrator
                                    last edited by Sep 26, 2019, 4:31 PM

                                    @ashima said in pfSense memory usage:

                                    20896 root 20 0 540M 476M nanslp 1 14:04 0.00% /usr/local/bin/php-cgi -q /usr/local/bin/notify_monitor.php

                                    That looks odd.

                                    But you also have ntopng running there consuming a lot.

                                     At the command line, run top -aS, then press 'o' and enter 'res'.

                                    Steve

                                    • A
                                      ashima LAYER 8 @stephenw10
                                      last edited by Sep 26, 2019, 4:51 PM

                                       @stephenw10 I am using the GUI command line; I use OpenVPN to log in to these remote machines. Can I use top -aS and press 'o' from the GUI?

                                      • S
                                        stephenw10 Netgate Administrator
                                        last edited by Sep 26, 2019, 5:00 PM

                                        You can use top -aS -o res from the gui to get a single snapshot.

                                        • A
                                          ashima LAYER 8
                                          last edited by Sep 26, 2019, 5:10 PM

                                          Here's the output :

                                          last pid: 99113;  load averages:  0.51,  0.36,  0.33  up 35+13:21:14    22:35:49
                                          172 processes: 3 running, 168 sleeping, 1 waiting
                                          
                                          Mem: 84M Active, 620M Inact, 478M Laundry, 2045M Wired, 204K Buf, 587M Free
                                          ARC: 1544M Total, 644M MFU, 812M MRU, 4788K Anon, 12M Header, 71M Other
                                               1322M Compressed, 4466M Uncompressed, 3.38:1 Ratio
                                          Swap: 4096M Total, 15M Used, 4081M Free
                                          
                                          
                                            PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
                                          20896 root          1  20    0   540M   477M nanslp  1  14:05   0.00% /usr/local/bin/php-cgi -q /usr/local/bin/notify_monitor.php
                                          56232 squid         1  20    0   434M   367M kqread  0 112:08   0.00% (squid-1) -f /usr/local/etc/squid/squid.conf (squid)
                                          86720 unbound       2  20    0 81016K 67500K kqread  1  34:17   0.00% /usr/local/sbin/unbound -c /var/unbound/unbound.conf
                                            345 root          1  22    0 92880K 26292K piperd  0   0:23   0.88% php-fpm: pool nginx (php-fpm)
                                          98064 root          1  92   20 90312K 24652K CPU0    0   0:00   0.00% /usr/local/bin/php-cgi -q /usr/local/bin/captiveportal_gather_stats.php commonuser loggedin
                                            346 root          1  52    0 88396K 23400K accept  0   0:22   0.00% php-fpm: pool nginx (php-fpm)
                                          75482 root          1  52    0 92752K 23064K accept  1   0:16   0.00% php-fpm: pool nginx (php-fpm)
                                          94115 squid         1  20    0 21880K 15724K sbwait  1   0:09   0.00% (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.conf (squidGuard)
                                          94285 squid         1  20    0 21880K 15644K sbwait  0   0:01   0.00% (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.conf (squidGuard)
                                          94550 squid         1  20    0 21880K 15344K sbwait  0   0:00   0.00% (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.conf (squidGuard)
                                          94791 squid         1  20    0 21880K 15132K sbwait  1   0:00   0.00% (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.conf (squidGuard)
                                           6350 squid         1  20    0 21880K 14912K sbwait  1   0:00   0.00% (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.conf (squidGuard)
                                           6595 squid         1  20    0 19832K 14780K sbwait  0   0:00   0.00% (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.conf (squidGuard)
                                            344 root          1  20    0 88264K 14768K kqread  0   1:29   0.00% php-fpm: master process (/usr/local/lib/php-fpm.conf) (php-fpm)
                                           6729 squid         1  20    0 17784K 13092K sbwait  0   0:00   0.00% (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.conf (squidGuard)
                                          17356 root          1  20    0 12908K 13012K select  1   5:08   0.00% /usr/local/sbin/ntpd -g -c /var/etc/ntpd.conf -p /var/run/ntpd.pid
                                          94399 squid         1  28    0 17784K 12696K sbwait  1   0:00   0.00% (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.conf (squidGuard)
                                           7049 squid         1  20    0 17784K 12696K sbwait  1   0:00   0.00% (squidGuard) -c /usr/local/etc/squidGuard/squidGuard.conf (squidGuard)
                                          

                                          I manually killed the ntopng process.

                                          Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.