Netgate Discussion Forum

    CPU Usage Problem PFSense 2.1.5

    General pfSense Questions
    • saltygiraffe

      I am running a free hotspot with about 100 users on average throughout the day. We have a 1 Gbps internet connection. We use the captive portal to give users internet access and to limit bandwidth to about 25 Mbps. The problem is that the server has a high load average with 100% CPU usage, and users are not getting the full 25 Mbps. As you can see below, there are 16 cores. pfSense provides the captive portal, DNS, and DHCP. We are running v2.1.5 on a Dell R610.

      If we run a test from a device to an internal server, we get 155+ Mbps. However, anything passing through pfSense to the internet gets choked. If we run a test from the WAN side, we get over 935 Mbps.

      The average output pfSense sees to the WAN is 112 Mbps.

      We reboot the server every day at 3:00 AM. Is there any optimization that can be done to improve performance?

      --------------------------------------------------------------------------

      Below is the output from top:

      last pid: 95232;  load averages:  2.06,  2.05,  2.04  up 0+14:03:18    17:36:52
      224 processes: 19 running, 152 sleeping, 53 waiting

      Mem: 212M Active, 61M Inact, 525M Wired, 212K Cache, 39M Buf, 46G Free
      Swap: 64G Total, 64G Free

      PID USERNAME PRI NICE  SIZE    RES STATE  C  TIME  WCPU COMMAND
      73816 root    119    0 27552K  4088K CPU2    2 842:48 100.00% /usr/pbi/bandwidthd-amd64/bandwidthd/bandw
      74746 root    119    0 27552K  4620K CPU11  11 842:48 100.00% /usr/pbi/bandwidthd-amd64/bandwidthd/bandw
        11 root    171 ki31    0K  256K CPU5    5 841:26 100.00% [idle{idle: cpu5}]
        11 root    171 ki31    0K  256K CPU4    4 840:07 100.00% [idle{idle: cpu4}]
        11 root    171 ki31    0K  256K CPU14  14 833:33 100.00% [idle{idle: cpu14}]
        11 root    171 ki31    0K  256K CPU15  15 830:05 100.00% [idle{idle: cpu15}]
        11 root    171 ki31    0K  256K CPU0    0 822:42 100.00% [idle{idle: cpu0}]
        11 root    171 ki31    0K  256K CPU12  12 821:43 100.00% [idle{idle: cpu12}]
        11 root    171 ki31    0K  256K CPU7    7 813:46 100.00% [idle{idle: cpu7}]
        11 root    171 ki31    0K  256K RUN    8 812:07 100.00% [idle{idle: cpu8}]
        11 root    171 ki31    0K  256K CPU13  13 797:13 100.00% [idle{idle: cpu13}]
        11 root    171 ki31    0K  256K CPU1    1 768:29 100.00% [idle{idle: cpu1}]
        11 root    171 ki31    0K  256K CPU6    6 681:54 100.00% [idle{idle: cpu6}]
        11 root    171 ki31    0K  256K CPU9    9 677:19 100.00% [idle{idle: cpu9}]
        11 root    171 ki31    0K  256K CPU10  10 535:30 100.00% [idle{idle: cpu10}]
        11 root    171 ki31    0K  256K CPU3    3 455:53 100.00% [idle{idle: cpu3}]
        12 root    -44    -    0K  848K WAIT  15  13:05  0.39% [intr{swi1: netisr 15}]
      21168 root      44    0  146M 32456K accept  8  0:02  0.29% /usr/local/bin/php
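For anyone reproducing this, the runaway workers can also be isolated from a shell by sorting processes by CPU usage. A generic sketch (exact columns vary slightly between FreeBSD and Linux ps):

```shell
# List the five heaviest CPU consumers; on this box the two bandwidthd
# workers show up at the top, each pinned near 100%.
ps ax -o %cpu,pid,comm | sort -rn | head -n 5
```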

      • Derelict (Netgate)

        If it were me I'd uninstall bandwidthd.
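A hedged sketch of the cleanup from a shell, assuming the stock FreeBSD userland on pfSense 2.1.x (the package itself is removed from System > Packages in the web GUI):

```shell
# Stop any runaway bandwidthd workers; pkill exits non-zero when nothing
# matches, so tolerate that in scripts.
pkill bandwidthd || true

# Verify nothing is left (the [b] trick keeps grep from matching itself).
ps ax | grep '[b]andwidthd' || echo "no bandwidthd processes running"
```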


        • saltygiraffe

          How stupid of me. I had tunnel vision. I feel like an idiot!

          Thanks!

          • phil.davis

            I have that happen occasionally: bandwidthd at 100% CPU. I've seen it on 2.1.x and 2.2. If anyone has a clue about how to trigger the problem, we could track it down.
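Until the trigger is understood, a cron watchdog could paper over it. A hypothetical sketch (the 90% threshold and the exact field handling are assumptions; columns per ps(1)):

```shell
#!/bin/sh
# Kill bandwidthd workers whose CPU usage exceeds a threshold, so a cron
# job can recover automatically from the runaway-CPU state.
THRESHOLD=90
ps ax -o pid,%cpu,comm \
  | awk -v max="$THRESHOLD" '$3 == "bandwidthd" && $2+0 > max+0 { print $1 }' \
  | while read -r pid; do
      echo "killing runaway bandwidthd pid $pid"
      kill "$pid"
    done
```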


            • jwelter99

              @saltygiraffe: [full quote of the original post trimmed]

              You may want to install Squid in an application like this. For busy hotspots, caching can have quite a dramatic impact.
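For context, the pfSense Squid package generates its own squid.conf from the GUI settings, but the caching knobs it exposes map onto directives along these lines (illustrative values only, not a tuned config):

```
# Illustrative squid.conf fragment (values are assumptions)
http_port 3128 transparent                     # intercept LAN web traffic
cache_mem 512 MB                               # in-memory object cache
cache_dir ufs /var/squid/cache 10000 16 256    # ~10 GB on-disk cache
```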

              • Mr. Jingles

                @saltygiraffe:

                We have a 1Gbps internet connection.

                You're a backbone to the AMX?

                ;D


                Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.