New PPPoE module (if_pppoe) causes high CPU usage and lagging

tlex

Current setup:

pfSense CE 2.8.0 (UFS install) virtualized on Proxmox.
No device passthrough used.
Host is an i5-8500T with 96 GB of RAM, NVMe disks, and a dual-port X710 10 Gbps SFP+ card.
VM has 4 cores / 8 GB RAM and 3 VirtIO 10 Gbps interfaces.
MTU is set to 9000 on the LAN interface and left at the default on the WAN.

My ISP is Bell Canada (3 Gbps), with a modem bypass using an XGS-PON SFP+ module from ECI.

Now, the VM's CPU usage used to sit between 2% and 5% before I enabled if_pppoe; it now stays steady at 20% without any major network load. It used to reach 20-30% under more intense network activity, but now it climbs to 70-90%.
I know this isn't scientific, but browsing is laggy, and accessing the pfSense web UI is also pretty laggy at times (something that never happened before). So I guess I must have something set wrong, but I can't figure out what. Looking at the regular logs, I don't see anything different or evident.

This DTrace probe, which fires whenever if_sendq_enqueue returns a non-zero (error) value, returns nothing under a speedtest / load test:

      dtrace -n 'fbt::if_sendq_enqueue:return / arg1 != 0 / { stack(); printf("=> %d", arg1); }'
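
Since that probe stays silent, a related cross-check (assuming nothing beyond the stock FreeBSD netstat) is the netisr queue statistics, which are relevant here because net.isr.dispatch is set to deferred below:

    # per-protocol netisr workstream statistics; the Dropped counters
    # should stay at zero if the deferred queues are keeping up
    netstat -Q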
      

I do have the following tunables set:

      net.link.ifqmaxlen="2048"
      net.isr.defaultqlimit="2048"
      net.isr.dispatch="deferred"
      net.isr.maxthreads="-1"
      net.isr.bindthreads="1"
      legal.intel_ipw.license_ack="1"
      legal.intel_iwi.license_ack="1"
      machdep.hyperthreading_allowed="0"
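
The quoting style suggests these live as boot-time tunables (e.g. /boot/loader.conf.local); if so, a quick sketch to confirm the netisr values actually took effect after a reboot:

    # read back the current netisr configuration
    sysctl net.isr.dispatch net.isr.maxthreads net.isr.bindthreads net.isr.defaultqlimit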
      

Running a speedtest from the VM itself, I can see the error count increase drastically in the interface statistics:

[Screenshot: interface statistics showing the error counters climbing]
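
To narrow down which direction the errors accumulate in while the test runs, the counters can also be sampled live; a minimal sketch, where vtnet1 stands in for whichever vtnet is the WAN:

    # 1-second samples of packets, errors, and drops on the assumed WAN interface
    netstat -dhw 1 -I vtnet1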

tlex @tlex

I found my issue!
I used to have an old VIP attached to the WAN interface.
It didn't cause any issues when using MPD, but it ended up creating problems with if_pppoe.
VIP removed, all good now :)
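
For anyone hitting the same thing, a leftover IP Alias shows up as an extra address on the interface, so it can be spotted from the shell too. A sketch, assuming the new backend names the interface pppoe0 (check ifconfig -a for the actual name):

    # look for unexpected extra "inet" addresses on the PPPoE interface
    ifconfig pppoe0 | grep inet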

mr_nets

Interesting. Does that mean the new PPPoE driver is not HA compatible? Was your VIP a CARP VIP or an IP Alias?

tlex @mr_nets

@mr_nets Alias
