Netgate Discussion Forum

    CPU Usage when network used

Problems Installing or Upgrading pfSense Software
stephenw10 Netgate Administrator

      Mmm, but all the interrupt loading is on one queue. Do you have a PPPoE WAN?

The single-thread performance of the N3700 is... not good, and potentially much worse if turbo/burst is not working.

Do you see any significant improvement if you disable ntopng?

      Steve

qwaven

Yes, the WAN is PPPoE. Is there something I can do to use more queues properly?

I can try turning ntopng off later to see what happens.

        Cheers!

stephenw10 Netgate Administrator

Ah, OK. In that case you are currently limited to a single queue on the PPPoE interface, and hence a single core.

          See: https://redmine.pfsense.org/issues/4821

          And the upstream: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=203856

You can probably gain some performance by setting the sysctl net.isr.dispatch to deferred under System > Advanced > System Tunables. That will require a reboot.

          https://docs.netgate.com/pfsense/en/latest/hardware/tuning-and-troubleshooting-network-cards.html#pppoe-with-multi-queue-nics
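A minimal sketch of checking and testing that sysctl from a shell (assuming shell access; persisting it under System > Advanced > System Tunables, as above, is still the way to keep it across reboots):

# Show the current netisr dispatch policy (typically "direct" by default)
sysctl net.isr.dispatch
# It should also be settable at runtime for a quick test before persisting it
sysctl net.isr.dispatch=deferred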

          Steve

qwaven

Tried the dispatch change:
            sysctl net.isr.dispatch
            net.isr.dispatch: deferred

CPU seemed to be at about 50% utilization.

            interrupt total rate
            cpu0:timer 122117 254
            cpu2:timer 121707 253
            cpu3:timer 116674 243
            cpu1:timer 115728 241
            irq256: ahci0 11720 24
            irq257: xhci0 2850 6
            irq258: hdac0 2 0
            irq260: t5nex0:evt 2 0
            irq269: igb0:que 0 659069 1372
            irq270: igb0:que 1 1457 3
            irq271: igb0:que 2 516 1
            irq272: igb0:que 3 515 1
            irq273: igb0:link 3 0
            irq274: pcib5 1 0
            irq280: pcib6 1 0
            irq286: pcib7 1 0
            irq287: igb3:que 0 453042 943
            irq288: igb3:que 1 573830 1194
            irq289: igb3:que 2 755133 1572
            irq290: igb3:que 3 438318 912
            irq291: igb3:link 3 0
            irq292: pcib8 1 0
            Total 3372690 7020
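For reference, a snapshot like the above comes from the stock FreeBSD tools; a quick way to reproduce it and see per-core load at the same time would be something like:

# Per-interrupt counters and rates (the "interrupt total rate" listing above)
vmstat -i
# Per-CPU and per-thread view, useful for spotting one saturated core
top -HaSP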

qwaven

Also, I've now tried disabling ntopng; CPU usage looks to be maybe 8-10% lower.

stephenw10 Netgate Administrator

Was that 50% the total CPU usage? Did throughput increase?

                Steve

qwaven

That was what was shown on the dashboard for CPU usage. If utilization is stuck on one core, I'm not sure there is anything else we can do.

As for throughput, it was about the same, but I'm not worrying about that since the source of the transfer may be a factor as well. Ideally it would be great to see it closer to my actual line speed, but I'm not sure how to test that reliably.

                  Cheers!

qwaven

                    Hi again,

I'm assuming we've exhausted the options for improving the CPU utilization here, but I just wanted to say thanks for the help and effort. I'm still open to trying anything, though.

                    Cheers!

stephenw10 Netgate Administrator

I suspect it might be. The single-thread performance of that CPU is about equal to that of the Pentium M I used to run, and that was good for ~650Mbps. At least according to this:
https://www.cpubenchmark.net/compare/Intel-Core2-Duo-E4500-vs-Intel-Pentium-N3700-vs-Intel-Pentium-M-1.73GHz/936vs2513vs1160
Obviously that's synthetic and there are many variables, etc. There's no PPPoE overhead in that test either.
The E4500 can pass Gigabit, just barely (at full-size TCP packets... many variables, etc!).

If that is to be believed, then it probably is running in burst mode, and I'm not sure there's much we can do until RSS is reworked in FreeBSD to allow multiple cores.

You could probably see better performance by off-loading the PPPoE to another device, though that would probably mean a double-NAT scenario, unfortunately.

                      Steve

qwaven

                        Hi Steve,

It's unfortunate about this RSS issue. I have another board that I plan to try out, though it's quite overkill, especially if only one core is going to be used for PPPoE. It does have some better on-board hardware that may help overall, but it is still just 2 GHz per core.

                        https://www.supermicro.com/products/motherboard/atom/A2SDi-H-TP4F.cfm

                        Cheers!

stephenw10 Netgate Administrator

Yes. I have a PPPoE WAN, but fortunately/unfortunately it's nowhere near fast enough to worry about this. 😉

There are no benchmarks for the C3958, but if we assume it's the same as the C3858 with 4 more cores, then it should give about 40% better single-thread performance.

It does seem like a waste of cores unless you virtualise it.

                          Steve

qwaven

                            Hi Steve,

So I switched over to the new board. Performance so far looks drastically better: CPU in the GUI was about 5-6% while transferring over PPPoE. I believe it's still just the one core.

                            PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU COMMAND
                            11 root 155 ki31 0K 256K CPU1 1 7:39 97.26% [idle{idle: cpu1}]
                            11 root 155 ki31 0K 256K CPU10 10 7:41 97.12% [idle{idle: cpu10}]
                            11 root 155 ki31 0K 256K CPU13 13 7:33 96.96% [idle{idle: cpu13}]
                            11 root 155 ki31 0K 256K CPU7 7 7:45 96.85% [idle{idle: cpu7}]
                            11 root 155 ki31 0K 256K CPU11 11 7:38 96.51% [idle{idle: cpu11}]
                            11 root 155 ki31 0K 256K RUN 4 7:43 96.46% [idle{idle: cpu4}]
                            11 root 155 ki31 0K 256K CPU3 3 7:44 96.46% [idle{idle: cpu3}]
                            11 root 155 ki31 0K 256K CPU9 9 7:36 96.26% [idle{idle: cpu9}]
                            11 root 155 ki31 0K 256K CPU5 5 7:42 95.99% [idle{idle: cpu5}]
                            11 root 155 ki31 0K 256K RUN 8 7:19 95.56% [idle{idle: cpu8}]
                            11 root 155 ki31 0K 256K CPU6 6 7:42 95.12% [idle{idle: cpu6}]
                            11 root 155 ki31 0K 256K CPU2 2 7:42 94.98% [idle{idle: cpu2}]
                            11 root 155 ki31 0K 256K CPU12 12 7:40 93.93% [idle{idle: cpu12}]
                            11 root 155 ki31 0K 256K RUN 15 7:35 87.04% [idle{idle: cpu15}]
                            11 root 155 ki31 0K 256K CPU14 14 7:31 82.95% [idle{idle: cpu14}]
                            11 root 155 ki31 0K 256K RUN 0 7:24 79.60% [idle{idle: cpu0}]

                            irq298: ix0:q0 2716423 6058
                            irq299: ix0:q1 244578 545
                            irq300: ix0:q2 461159 1029
                            irq301: ix0:q3 243416 543
                            irq302: ix0:q4 378891 845
                            irq303: ix0:q5 124788 278
                            irq304: ix0:q6 478729 1068
                            irq305: ix0:q7 125913 281
                            irq306: ix0:link 1 0
                            irq307: ix1:q0 326596 728
                            irq308: ix1:q1 254938 569
                            irq309: ix1:q2 614196 1370
                            irq310: ix1:q3 250402 558
                            irq311: ix1:q4 388996 868
                            irq312: ix1:q5 128709 287
                            irq313: ix1:q6 492403 1098
                            irq314: ix1:q7 130143 290
                            irq315: ix1:link 1 0

ix0 is the PPPoE WAN and ix1 is the internal LANs.
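Worth noting: ix0:q0 above still carries the bulk of the WAN interrupts, which fits the PPPoE single-queue limitation. A rough way to watch that distribution with the standard FreeBSD tools:

# One-shot per-queue interrupt counters for the WAN NIC
vmstat -i | grep 'ix0:q'
# Live-updating interrupt rates, refreshed every second
systat -vmstat 1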

I was thinking about virtualizing. However, I've seen so many discussions where people suggest this is not a great choice for a firewall. Still, I'm open to exploring it more. Do you have any thoughts? Proxmox was my first choice.

                            Cheers!

stephenw10 Netgate Administrator

                              Nice, what sort of throughput were you seeing at that point?

I can't really advise on hypervisors; I'm not using anything myself right now.

                              A lot of people here are using Proxmox though. ESXi is also popular.

                              Steve

qwaven

Same throughput, but I believe that's more down to the source. I haven't had a chance to test the internal network to see if anything there has improved. Will update once I have.

qwaven

So, testing with iperf3, I still don't seem to be getting anywhere close to 10G bandwidth.

It looks just about spot on for 1G.

                                  [ 41] 0.00-10.00 sec 56.4 MBytes 47.4 Mbits/sec 3258 sender
                                  [ 41] 0.00-10.00 sec 56.4 MBytes 47.3 Mbits/sec receiver
                                  [ 43] 0.00-10.00 sec 58.1 MBytes 48.8 Mbits/sec 3683 sender
                                  [ 43] 0.00-10.00 sec 58.0 MBytes 48.6 Mbits/sec receiver
                                  [SUM] 0.00-10.00 sec 1.10 GBytes 943 Mbits/sec 69930 sender
                                  [SUM] 0.00-10.00 sec 1.10 GBytes 941 Mbits/sec receiver

                                  Any ideas?

This is literally an SFP+ 10G interface on pfSense, to the switch, to the file server. The file server has two bonded 10G links. Nothing else is running.
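One way to narrow down where the ~1G ceiling is (a sketch only; the address and stream count are placeholders): run iperf3 between two hosts on different VLANs so the traffic is actually routed through pfSense rather than terminating on the firewall itself, and use several parallel streams so the flows can spread across the NIC queues.

# On the file server (or any 10G host): iperf3 -s
# From a client on a different VLAN, routed through pfSense:
iperf3 -c 192.0.2.10 -P 8 -t 30
# And the reverse direction, since LACP bonds hash flows and can load one link unevenly:
iperf3 -c 192.0.2.10 -P 8 -t 30 -R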

                                  Cheers!

stephenw10 Netgate Administrator

                                    How many processes are you running there?

You have 8 queues, so I don't expect to see any advantage beyond 8.

Is that result testing over 1G? What do you actually see over 10G?
I would anticipate something like ~4Gbps, maybe. Though if you're running iperf on the firewall itself, that may reduce it.

                                    Steve

qwaven

My test with iperf was sending 20 connections (based on an example I saw on the internet), and it looks like it would pretty much saturate a 1G link.

This is not 1G, though. This is over my internal network: pfSense reports the interface as 10G, the switch is all 10G, and the file server has 2x10G.

Curious: why would running iperf on the firewall reduce this?

FYI, the CPU did not appear stressed in any way.

                                      Cheers!

stephenw10 Netgate Administrator

                                        That seems far too much like a 1G link limit to be coincidence.

                                        Check that each part is actually linked at 10G.
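For reference, these are the sorts of checks meant here (interface names are just examples): look at the negotiated media line on pfSense, use ethtool on a Linux NAS, and check the port status on the switch.

# pfSense / FreeBSD side (ix0 as an example interface)
ifconfig ix0 | grep media
# Linux NAS side (eth4/eth5 being the bond members in this case)
ethtool eth4 | grep -E 'Speed|Duplex'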

                                        Steve

qwaven

So on my pfSense box I can see all my internal interface VLANs are listed with:

                                          media: Ethernet autoselect (10Gbase-T <full-duplex>)

On my NAS I see the bonded interfaces:

                                          Settings for eth4:
                                          Supported ports: [ FIBRE ]
                                          Supported link modes: 1000baseKX/Full
                                          10000baseKR/Full
                                          Supported pause frame use: Symmetric Receive-only
                                          Supports auto-negotiation: No
                                          Advertised link modes: 1000baseKX/Full
                                          10000baseKR/Full
                                          Advertised pause frame use: Symmetric
                                          Advertised auto-negotiation: No
                                          Speed: 10000Mb/s
                                          Duplex: Full

                                          Port: Direct Attach Copper
                                          PHYAD: 0
                                          Transceiver: internal
                                          Auto-negotiation: off
                                          Cannot get wake-on-lan settings: Operation not permitted
                                          Current message level: 0x00000014 (20)
                                          link ifdown
                                          Link detected: yes

                                          Settings for eth5:
                                          Supported ports: [ FIBRE ]
                                          Supported link modes: 1000baseKX/Full
                                          10000baseKR/Full
                                          Supported pause frame use: Symmetric Receive-only
                                          Supports auto-negotiation: No
                                          Advertised link modes: 1000baseKX/Full
                                          10000baseKR/Full
                                          Advertised pause frame use: Symmetric
                                          Advertised auto-negotiation: No
                                          Speed: 10000Mb/s
                                          Duplex: Full

                                          Port: Direct Attach Copper
                                          PHYAD: 0
                                          Transceiver: internal
                                          Auto-negotiation: off
                                          Cannot get wake-on-lan settings: Operation not permitted
                                          Current message level: 0x00000014 (20)
                                          link ifdown
                                          Link detected: yes

                                          On the switch:

                                          0/3 PC Mbr Enable Auto D 10G Full Up Enable Enable Disable (nas)
                                          0/4 PC Mbr Enable Auto D 10G Full Up Enable Enable Disable (nas)
                                          ...
                                          0/16 Enable Auto 10G Full Up Enable Enable Disable (pfsense)

Grimson

                                            Do you use traffic shaping/limiters?
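In case it helps to check (a sketch; limiters live under Firewall > Traffic Shaper > Limiters in the GUI): from a shell, any configured dummynet limiters or ALTQ shaper queues should show up with:

# Dummynet pipes back pfSense limiters; this should list any that are configured
dnctl pipe show
# ALTQ queues, if the traffic shaper wizard has been used
pfctl -s queue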
