Netgate Discussion Forum

    Terrible performance on SYS-5018A-FTN4 (c2758)

    General pfSense Questions
    2 Posts 2 Posters 1.0k Views
    • Howard2937011

      I recently purchased a SYS-5018A-FTN4 for my Cisco lab and discovered that I only ever get around 250 Mbit/s between hosts (the highest I've seen is ~280). Here's my setup:

      1. I installed pfSense.
      2. I set kern.ipc.nmbclusters to 1000000 (see the sketch just after this list).
      3. I put two hosts on a single port (igb1), separated them by VLAN, and gave them IPs via DHCP.
      4. I ran "iperf -s" on one host and "iperf -c 192.168.2.2 -t 60" on the other.
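
      For reference, a minimal sketch of how that tunable can be set on a FreeBSD-based system such as pfSense. In pfSense itself the supported place is System > Advanced > System Tunables; the loader.conf.local path below is the generic FreeBSD equivalent, and the value is just the one from step 2:

      # /boot/loader.conf.local: raise the mbuf cluster limit at boot
      kern.ipc.nmbclusters="1000000"

      # verify after reboot
      $ sysctl kern.ipc.nmbclusters
      kern.ipc.nmbclusters: 1000000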

      Results:

      $ iperf -c 192.168.2.2 -t 60
      ------------------------------------------------------------
      Client connecting to 192.168.2.2, TCP port 5001
      TCP window size:  208 KByte (default)
      ------------------------------------------------------------
      [  3] local 192.168.1.2 port 63463 connected with 192.168.2.2 port 5001
      [ ID] Interval       Transfer     Bandwidth
      [  3]  0.0-60.0 sec  1.73 GBytes   247 Mbits/sec
      

      While this is happening (about 30 seconds in), here is the result of "top -aSH":

      last pid: 89932;  load averages:  0.12,  0.05,  0.06                                                   up 0+00:20:14  10:21:51
      196 processes: 10 running, 123 sleeping, 63 waiting
      CPU:  0.0% user,  0.0% nice,  0.0% system,  4.3% interrupt, 95.7% idle
      Mem: 79M Active, 35M Inact, 221M Wired, 38M Buf, 7533M Free
      Swap: 16G Total, 16G Free
      
        PID USERNAME PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
         11 root     155 ki31     0K   128K RUN     0  20:06 100.00% [idle{idle: cpu0}]
         11 root     155 ki31     0K   128K CPU2    2  20:01 100.00% [idle{idle: cpu2}]
         11 root     155 ki31     0K   128K CPU1    1  19:58 100.00% [idle{idle: cpu1}]
         11 root     155 ki31     0K   128K CPU3    3  19:58 100.00% [idle{idle: cpu3}]
         11 root     155 ki31     0K   128K CPU6    6  19:57 100.00% [idle{idle: cpu6}]
         11 root     155 ki31     0K   128K CPU7    7  19:43 100.00% [idle{idle: cpu7}]
         11 root     155 ki31     0K   128K CPU5    5  19:59  98.58% [idle{idle: cpu5}]
         11 root     155 ki31     0K   128K CPU4    4  19:45  78.86% [idle{idle: cpu4}]
         12 root     -92    -     0K  1024K CPU4    4   0:17  24.56% [intr{irq270: igb1:que}]
         12 root     -92    -     0K  1024K WAIT    5   0:03   5.27% [intr{irq271: igb1:que}]
          0 root     -16    0     0K   672K swapin  0   0:50   0.00% [kernel{swapper}]
         12 root     -92    -     0K  1024K WAIT    7   0:18   0.00% [intr{irq273: igb1:que}]
         12 root     -60    -     0K  1024K WAIT    5   0:05   0.00% [intr{swi4: clock}]
         12 root     -92    -     0K  1024K WAIT    1   0:03   0.00% [intr{irq267: igb1:que}]
         12 root     -92    -     0K  1024K WAIT    3   0:03   0.00% [intr{irq269: igb1:que}]
         12 root     -92    -     0K  1024K WAIT    6   0:01   0.00% [intr{irq272: igb1:que}]
         12 root     -92    -     0K  1024K WAIT    2   0:01   0.00% [intr{irq268: igb1:que}]
      
      

      According to this, the CPUs are mostly idle; only one igb1 queue interrupt thread shows any real load (~25% WCPU on CPU 4).
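
      For anyone wanting to reproduce this, a couple of stock FreeBSD commands (nothing pfSense-specific) that show how the load spreads across the igb1 queues while iperf is running; the grep patterns are just illustrative:

      # per-interrupt counters for the igb1 queue vectors
      $ vmstat -i | grep igb1

      # per-queue statistics exposed by the igb(4) driver
      $ sysctl dev.igb.1 | grep queue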

      Does anyone have similar performance? Any hints or tuning parameters would be very much appreciated. Thanks.

      • Guest

        I would suggest connecting one device to one port and another PC to a different port, without VLANs; right now you only know the inter-VLAN throughput and nothing more. A quick sketch of that comparison is below.
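
        A minimal sketch of that comparison test, assuming for illustration that one host sits behind igb1 (192.168.1.0/24) and the other behind igb2 (192.168.2.0/24), so the traffic is routed between two physical ports instead of hairpinned over a single one:

        # on the host behind igb2 (e.g. 192.168.2.2)
        $ iperf -s

        # on the host behind igb1
        $ iperf -c 192.168.2.2 -t 60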

        • mbufs (kern.ipc.nmbclusters) to 1000000
        • PowerD (Hiadaptive)
        • enable TRIM support in pfSense

        These would be the most common settings to tune on this board (a rough FreeBSD-level mapping is sketched at the end of this post). In any case, if you are running a pfSense 2.2.6 amd64 full install on an SSD or HDD, you should be able to get well above ~250 MBit/s, as others have reported here in the forum. LAN-to-LAN throughput should be close to 1 GBit/s. WAN throughput is often lower, mainly because a PPPoE Internet connection runs on only a single CPU core.
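
        For anyone searching later, a rough sketch of how those suggestions map to FreeBSD-level settings on a pfSense 2.2.x (FreeBSD 10.1) full install. In practice the supported way is the GUI (System > Advanced) plus System Tunables, so treat the paths, device names and values below as illustrative only:

        # /boot/loader.conf.local: raise the mbuf cluster limit (same tunable the OP set)
        kern.ipc.nmbclusters="1000000"

        # PowerD "Hiadaptive" corresponds to FreeBSD's powerd(8) in adaptive mode
        # biased toward performance; rc.conf-style equivalent outside the GUI:
        powerd_enable="YES"
        powerd_flags="-a hiadaptive"

        # TRIM on a UFS SSD install uses tunefs(8) underneath; normally run against
        # the filesystem while it is not mounted (device name is illustrative):
        tunefs -t enable /dev/ada0s1a

        # For the PPPoE single-core limitation, a commonly suggested System Tunable
        # is net.isr.dispatch=deferred, so inbound traffic is not pinned to one core.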
