Netgate Discussion Forum

    APU2C2: max bandwidth input issue

    General pfSense Questions
    4 Posts 2 Posters 1.0k Views
    • HanXHX

      Hi,

      I have some problems with my APU2C2 running the latest stable pfSense (amd64), fully upgraded. When I use iperf / speedtest-cli I don't get the same results as another device on my LAN.

      My max bandwidth is ~950 Mbit/s down / ~250 Mbit/s up. However, on my APU: ~550 Mbit/s / ~240 Mbit/s.

      Tests are done on the same switch. The APU is the client; no routing, the firewall is open.

      Powerd: Hiadaptive
      LRO/TSO: enabled or disabled -> same results

      I tried these options, but I can't see any improvement (https://doc.pfsense.org/index.php/Tuning_and_Troubleshooting_Network_Cards):

      hw.igb.fc_setting=0
      kern.ipc.nmbclusters="1000000"
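
      For reference, a sketch of where these tunables usually go on pfSense: loader tunables are typically placed in /boot/loader.conf.local and only take effect after a reboot (the file name is standard pfSense/FreeBSD practice, assumed here rather than stated in the thread):

      ```shell
      # /boot/loader.conf.local — loader tunables from the pfSense tuning guide
      hw.igb.fc_setting=0             # disable flow control on igb(4) NICs
      kern.ipc.nmbclusters="1000000"  # raise the mbuf cluster ceiling
      ```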

      Any idea/advice?

      Thanks!

      HanXHX

      htop
      All my cores are 100% used.

      speedtest-cli

      Retrieving speedtest.net configuration…
      Retrieving speedtest.net server list...
      Testing from Orange (90.65.XXXXX)...
      Selecting best server based on latency...
      Hosted by Orange (Lyon) [2.85 km]: 14.365 ms
      Testing download speed….....................................
      Download: 308.54 Mbit/s
      Testing upload speed..................................................
      Upload: 162.64 Mbit/s

      Iperf down on APU

      [2.3.2-RELEASE][admin@XXXXXXXXXx]/root: iperf3 -P 8 -R -c XXXXXXXXXXXXXXXX
      ….....
      [SUM]  0.00-10.00  sec  669 MBytes  561 Mbits/sec  125            sender
      [SUM]  0.00-10.00  sec  653 MBytes  548 Mbits/sec                  receiver

      Iperf up on APU

      [2.3.2-RELEASE][admin@XXXXXXXXXXXXXXX]/root: iperf3 -P 8 -c XXXXXXXXXXXXXXX
      …
      [SUM]  0.00-10.00  sec  288 MBytes  242 Mbits/sec  588            sender
      [SUM]  0.00-10.00  sec  286 MBytes  240 Mbits/sec                  receiver

      top -aSH (while running iperf down)

      last pid: 65203;  load averages:  0.99,  0.51,  0.22                                                                                                up 0+07:47:16  08:21:55
      131 processes: 7 running, 96 sleeping, 28 waiting
      CPU:  2.6% user,  0.0% nice, 46.5% system,  0.0% interrupt, 50.9% idle
      Mem: 16M Active, 76M Inact, 166M Wired, 996K Cache, 77M Buf, 1576M Free
      Swap:

      PID USERNAME PRI NICE  SIZE    RES STATE  C  TIME    WCPU COMMAND
        11 root    155 ki31    0K    64K RUN    0 462:10  98.10% [idle{idle: cpu0}]
        11 root    155 ki31    0K    64K CPU1    1 462:18  92.38% [idle{idle: cpu1}]
        11 root    155 ki31    0K    64K RUN    2 461:59  57.96% [idle{idle: cpu2}]
          0 root    -92    -    0K  256K CPU3    3  0:34  51.17% [kernel{igb2 que}]
        11 root    155 ki31    0K    64K RUN    3 461:08  46.78% [idle{idle: cpu3}]
      65043 root      84    0 17692K  3432K CPU2    2  0:05  39.70% iperf3 -P 8 -R -c XXXXXXXXXXXXXXX
          0 root    -92    -    0K  256K -      2  0:54  5.57% [kernel{igb2 que}]
      43499 root      21    0  262M 32124K accept  1  0:01  1.07% php-fpm: pool nginx (php-fpm)
        12 root    -92    -    0K  448K WAIT    1  0:07  0.20% [intr{irq263: igb2:que}]
      65203 root      20    0 21856K  3008K CPU0    0  0:00  0.10% top -aSH
          0 root    -16    -    0K  256K swapin  0  0:50  0.00% [kernel{swapper}]
        12 root    -60    -    0K  448K WAIT    0  0:22  0.00% [intr{swi4: clock}]
      37776 root      52  20 17000K  2440K wait    2  0:10  0.00% /bin/sh /var/db/rrd/updaterrd.sh
          5 root    -16    -    0K    16K pftm    0  0:09  0.00% [pf purge]
        12 root    -92    -    0K  448K WAIT    0  0:08  0.00% [intr{irq262: igb2:que}]
      20429 root      20    0 39136K  7028K kqread  0  0:05  0.00% nginx: worker process (nginx)
      20120 root      20    0 39136K  7060K kqread  1  0:05  0.00% nginx: worker process (nginx)
      42394 root      20    0 14408K  1952K select  2  0:04  0.00% /usr/sbin/powerd -b hadp -a hadp -n hadp
        52 root      -8    -    0K    16K mdwait  0  0:04  0.00% [md1]
        15 root    -16    -    0K    16K -      0  0:04  0.00% [rand_harvestq]

      systat -vmstat 1 (while running iperf down)

      3 users    Load  0.93  0.56  0.26                  Dec 23 08:23

      Mem:KB    REAL            VIRTUAL                      VN PAGER  SWAP PAGER
              Tot  Share      Tot    Share    Free          in  out    in  out
      Act  101492    8508  1142828    10084 1613528  count
      All  110516  13588  1196180    50096          pages
      Proc:                                                            Interrupts
        r  p  d  s  w  Csw  Trp  Sys  Int  Sof  Flt        ioflt  3378 total
        1          45      869  427  99k        13            cow        uart0 4
                                                                zfod        ehci0 18
      73.6%Sys  0.0%Intr  1.6%User  0.0%Nice 24.8%Idle        ozfod      ahci0 19
      |    |    |    |    |    |    |    |    |    |          %ozfod  1122 cpu0:timer
      =====================================>                    daefr      igb0:link
                                              6 dtbuf          prcfr      igb2:que 0
      Namei    Name-cache  Dir-cache    110362 desvn          totfr      igb2:que 1
        Calls    hits  %    hits  %      1033 numvn          react      igb2:link
            3      3 100                  190 frevn          pdwak  1120 cpu1:timer
                                                              7 pdpgs    15 cpu3:timer
      Disks  md0  md1  ada0 pass0                            intrn  1121 cpu2:timer
      KB/t  0.00  0.00  0.00  0.00                      169792 wire
      tps      0    0    0    0                      17844 act
      MB/s  0.00  0.00  0.00  0.00                      78268 inact
      %busy    0    0    0    0                        996 cache
                                                        1612532 free
                                                          79076 buf

      netstat -m

      9279/6156/15435 mbufs in use (current/cache/total)
      4524/4086/8610/1000000 mbuf clusters in use (current/cache/total/max)
      4524/4078 mbuf+clusters out of packet secondary zone in use (current/cache)
      0/870/870/58732 4k (page size) jumbo clusters in use (current/cache/total/max)
      0/0/0/17402 9k jumbo clusters in use (current/cache/total/max)
      0/0/0/9788 16k jumbo clusters in use (current/cache/total/max)
      11496K/13191K/24687K bytes allocated to network (current/cache/total)
      0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
      0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
      0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
      0/0/0 requests for jumbo clusters denied (4k/9k/16k)
      0 requests for sfbufs denied
      0 requests for sfbufs delayed
      0 requests for I/O initiated by sendfile
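
      A quick way to read that output: the "denied" counters are what matter for mbuf exhaustion. A small sketch that sums them, using the lines quoted above as sample input:

      ```python
      # Sum the "denied" counters from `netstat -m` output; a non-zero total
      # would indicate mbuf starvation. Sample text is taken from the output above.
      sample = """\
      0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
      0/0/0 requests for jumbo clusters denied (4k/9k/16k)
      0 requests for sfbufs denied
      """

      denied_total = 0
      for line in sample.splitlines():
          if "denied" in line:
              counts = line.split()[0]          # e.g. "0/0/0" or "0"
              denied_total += sum(int(n) for n in counts.split("/"))

      print("total denied requests:", denied_total)  # 0 -> no mbuf starvation here
      ```

      Since every denied counter is zero, the slowdown does not look like a buffer-exhaustion problem.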

      systat -iostat 1 (while running iperf down)
                          /0  /1  /2  /3  /4  /5  /6  /7  /8  /9  /10
          Load Average  |||

      /0%  /10  /20  /30  /40  /50  /60  /70  /80  /90  /100
      cpu  user|
          nice|
        system|XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      interrupt|
          idle|XXXXXXXXXXXX

      /0%  /10  /20  /30  /40  /50  /60  /70  /80  /90  /100
      md0  MB/s
            tps|
      md1  MB/s
            tps|
      ada0  MB/s
            tps|
      pass0 MB/s
            tps|

      • Guest

        My max bandwidth is ~950 Mbit/s down / ~250 Mbit/s up. However, on my APU: ~550 Mbit/s / ~240 Mbit/s.

        I would guess that you will not be able to achieve the full throughput of ~950 Mbit/s down and ~250 Mbit/s up.
        If you only have a small switch with no features, i.e. a dumb switch without management, you should set up a PC as the iPerf server in front of the WAN port (together with the small switch) and another PC as the client behind the LAN, and then run the iPerf test again. That will get you closer to a real result you can count on!
        With iPerf you can also use -P 8, so you would be sending 8 parallel streams, which should saturate your WAN port better!

        Tests are done on the same switch. The APU is the client; no routing, the firewall is open.

        One PC as client and one PC as server, testing through the pfSense box, and no other way, please!

        Powerd: Hiadaptive
        LRO/TSO: enabled or disabled -> same results

        PowerD: hiadaptive
        LRO/TSO: enabled
        mbuf size 250000 or 500000

        All of them applied together will usually show a better result than changing each one on its own, where nothing changes!

        • HanXHX

          Another test:

          [PC] ----------------- [ Switch ] ----- [APU]
          192.168.1.18                            192.168.1.254

          PC hosts iperf server.

          From APU:
          iperf3 -c 192.168.1.18 ===> ~940Mb/sec (OK !)
          iperf3 -R -c 192.168.1.18 ====>  ~280Mb/sec
          iperf3 -P4 -R -c 192.168.1.18 ====>  ~570Mb/sec
          iperf3 -P8 -R -c 192.168.1.18 ====>  ~580Mb/sec

          I get the same results when the APU hosts the iperf server.
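
          To put those numbers in perspective against the gigabit line rate, here is a small summary sketch of the figures above (the labels are mine):

          ```python
          # Summarize the iperf3 results above as a fraction of gigabit line rate.
          LINE_RATE_MBPS = 1000  # nominal 1 Gbit/s link

          results = {
              "1 stream, APU sending": 940,
              "1 stream, APU receiving (-R)": 280,
              "4 streams, APU receiving (-P4 -R)": 570,
              "8 streams, APU receiving (-P8 -R)": 580,
          }

          for label, mbps in results.items():
              pct = 100 * mbps / LINE_RATE_MBPS
              print(f"{label}: {mbps} Mbit/s ({pct:.0f}% of line rate)")
          ```

          The send side reaches wire speed, while the receive side tops out near ~58% even with 8 streams; together with the saturated cores seen in top/htop, this looks CPU-bound on the receive path rather than window-limited.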

          • Guest

            [PC] ------------------- [ Switch ] ------ [APU]
            192.168.1.18                                    192.168.1.254
            

            It should be more like this: through the APU, not any other way.

            WAN throughput:
            PC (iPerf server) -------- Switch ---------- WAN port--[APU]--LAN port--PC (iPerf client)

            LAN throughput:
            PC1 (iPerf client) and PC2 (iPerf server) directly on the APU

            Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.