Netgate Discussion Forum

    EM0 High Interrupt, Enabling polling brings down the interface. 2.0.2-RELEASE

      wallabybob:

      A snapshot build of pfSense 2.1 might work better for you. Snapshot builds of pfSense 2.1 have more up-to-date device drivers than builds of pfSense 2.0.x.

        cncking2000:

        I just updated to the latest 2.1 snapshot there was. The problem is still occurring. From a machine that I have pinging the router…

        I am only testing polling by toggling it with ifconfig em0 polling and ifconfig em0 -polling, as sketched below.
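
        A minimal sketch of the toggle sequence from the pfSense shell (em0 and the 10.1.10.1 address match my setup; the ping runs on a LAN host in another terminal):

        ```
        # On the LAN host:  ping 10.1.10.1
        ifconfig em0 polling    # enable polling on the interface
        sleep 30                # watch the ping times while polling is active
        ifconfig em0 -polling   # disable polling again
        ```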

        64 bytes from 10.1.10.1: icmp_req=399 ttl=64 time=0.204 ms
        64 bytes from 10.1.10.1: icmp_req=400 ttl=64 time=0.207 ms
        64 bytes from 10.1.10.1: icmp_req=401 ttl=64 time=0.257 ms
        64 bytes from 10.1.10.1: icmp_req=402 ttl=64 time=0.105 ms
        64 bytes from 10.1.10.1: icmp_req=403 ttl=64 time=0.167 ms
        64 bytes from 10.1.10.1: icmp_req=404 ttl=64 time=0.221 ms

        Enable Polling

        64 bytes from 10.1.10.1: icmp_req=405 ttl=64 time=11085 ms
        64 bytes from 10.1.10.1: icmp_req=406 ttl=64 time=10085 ms
        64 bytes from 10.1.10.1: icmp_req=407 ttl=64 time=9085 ms
        64 bytes from 10.1.10.1: icmp_req=408 ttl=64 time=8085 ms
        64 bytes from 10.1.10.1: icmp_req=409 ttl=64 time=7086 ms
        64 bytes from 10.1.10.1: icmp_req=410 ttl=64 time=6086 ms
        64 bytes from 10.1.10.1: icmp_req=411 ttl=64 time=5086 ms
        64 bytes from 10.1.10.1: icmp_req=412 ttl=64 time=4086 ms
        64 bytes from 10.1.10.1: icmp_req=413 ttl=64 time=3086 ms
        64 bytes from 10.1.10.1: icmp_req=414 ttl=64 time=2087 ms
        64 bytes from 10.1.10.1: icmp_req=415 ttl=64 time=1087 ms
        64 bytes from 10.1.10.1: icmp_req=416 ttl=64 time=87.2 ms

        Disable Polling

        64 bytes from 10.1.10.1: icmp_req=417 ttl=64 time=0.243 ms
        64 bytes from 10.1.10.1: icmp_req=418 ttl=64 time=0.146 ms
        64 bytes from 10.1.10.1: icmp_req=419 ttl=64 time=0.164 ms
        64 bytes from 10.1.10.1: icmp_req=420 ttl=64 time=0.167 ms

        Re-enable polling

        64 bytes from 10.1.10.1: icmp_req=421 ttl=64 time=9365 ms
        64 bytes from 10.1.10.1: icmp_req=422 ttl=64 time=8365 ms
        64 bytes from 10.1.10.1: icmp_req=423 ttl=64 time=7366 ms
        64 bytes from 10.1.10.1: icmp_req=424 ttl=64 time=6366 ms
        64 bytes from 10.1.10.1: icmp_req=425 ttl=64 time=5366 ms
        64 bytes from 10.1.10.1: icmp_req=426 ttl=64 time=4366 ms
        64 bytes from 10.1.10.1: icmp_req=427 ttl=64 time=3367 ms
        64 bytes from 10.1.10.1: icmp_req=428 ttl=64 time=2367 ms
        64 bytes from 10.1.10.1: icmp_req=429 ttl=64 time=1367 ms
        64 bytes from 10.1.10.1: icmp_req=430 ttl=64 time=368 ms

        Disable Polling.

        64 bytes from 10.1.10.1: icmp_req=431 ttl=64 time=0.133 ms
        64 bytes from 10.1.10.1: icmp_req=432 ttl=64 time=0.240 ms
        64 bytes from 10.1.10.1: icmp_req=433 ttl=64 time=0.154 ms
        64 bytes from 10.1.10.1: icmp_req=434 ttl=64 time=0.239 ms
        64 bytes from 10.1.10.1: icmp_req=435 ttl=64 time=0.145 ms
        64 bytes from 10.1.10.1: icmp_req=436 ttl=64 time=0.305 ms
        64 bytes from 10.1.10.1: icmp_req=437 ttl=64 time=0.111 ms

          wallabybob:

          @cncking2000:

          The problem is still occurring.

          What problem? Latency when polling is enabled? High interrupt load?

          You haven't presented any evidence of high interrupt load. Please post the output of the pfSense shell command:

          ```
          vmstat -i
          ```

            cncking2000:

            Everything seen below is without polling. Enabling polling drops the connection, and it will not work.

            [2.1-BETA1][admin@router.*****.com]/root(31): vmstat -i
            interrupt                          total       rate
            irq1: atkbd0                         374          0
            irq15: ata1                           68          0
            irq17: atapci1                    100178         30
            irq18: em0                        774085        237
            irq19: sis0                       439173        134
            cpu0: timer                      6521961       1999
            Total                            7835839       2402

            The problem is incredibly high latency with polling enabled.

            [2.1-BETA1][admin@router.*****.com]/root(32): top -SH

            last pid: 87256;  load averages:  1.43,  0.77,  0.41    up 0+01:01:42  15:43:49
            128 processes: 4 running, 104 sleeping, 20 waiting
            CPU:  0.0% user,  0.0% nice,  100% system,  0.0% interrupt,  0.0% idle
            Mem: 76M Active, 257M Inact, 92M Wired, 12M Cache, 56M Buf, 13M Free
            Swap: 1024M Total, 1024M Free

            PID USERNAME PRI NICE   SIZE    RES STATE    TIME   WCPU COMMAND
               0 root     -68    0     0K    64K -        2:03 62.26% kernel{em0 taskq}

            IPerf

            pca@PCA-Linux ~ $ iperf -c 10.1.10.1 -d -i 2 -t 50
            ------------------------------------------------------------
            Server listening on TCP port 5001
            TCP window size: 85.3 KByte (default)


            Client connecting to 10.1.10.1, TCP port 5001
            TCP window size:  169 KByte (default)

            [  5] local 10.1.10.121 port 35687 connected with 10.1.10.1 port 5001
            [  4] local 10.1.10.121 port 5001 connected with 10.1.10.1 port 50877
            [ ID] Interval       Transfer     Bandwidth
            [  5]  0.0- 2.0 sec  78.9 MBytes   331 Mbits/sec
            [  4]  0.0- 2.0 sec  47.3 MBytes   198 Mbits/sec
            [  5]  2.0- 4.0 sec  87.6 MBytes   368 Mbits/sec
            [  4]  2.0- 4.0 sec  30.4 MBytes   128 Mbits/sec
            [  5]  4.0- 6.0 sec  82.0 MBytes   344 Mbits/sec
            [  4]  4.0- 6.0 sec  39.9 MBytes   168 Mbits/sec
            [  5]  6.0- 8.0 sec  81.5 MBytes   342 Mbits/sec
            [  4]  6.0- 8.0 sec  40.6 MBytes   170 Mbits/sec
            [  5]  8.0-10.0 sec   101 MBytes   423 Mbits/sec
            [  4]  8.0-10.0 sec  23.9 MBytes   100 Mbits/sec
            [  5] 10.0-12.0 sec  79.6 MBytes   334 Mbits/sec
            [  4] 10.0-12.0 sec  45.4 MBytes   190 Mbits/sec
            [  5] 12.0-14.0 sec  90.0 MBytes   377 Mbits/sec
            [  4] 12.0-14.0 sec  35.3 MBytes   148 Mbits/sec
            [  5] 14.0-16.0 sec  68.0 MBytes   285 Mbits/sec
            [  4] 14.0-16.0 sec  56.9 MBytes   239 Mbits/sec
            [  5] 16.0-18.0 sec  77.8 MBytes   326 Mbits/sec
            [  4] 16.0-18.0 sec  46.8 MBytes   196 Mbits/sec
            [  5] 18.0-20.0 sec  90.2 MBytes   379 Mbits/sec
            [  4] 18.0-20.0 sec  35.5 MBytes   149 Mbits/sec
            [  5] 20.0-22.0 sec  91.1 MBytes   382 Mbits/sec
            [  4] 20.0-22.0 sec  35.1 MBytes   147 Mbits/sec
            [  5] 22.0-24.0 sec  92.5 MBytes   388 Mbits/sec
            [  4] 22.0-24.0 sec  33.9 MBytes   142 Mbits/sec
            [  5] 24.0-26.0 sec  89.9 MBytes   377 Mbits/sec
            [  4] 24.0-26.0 sec  36.6 MBytes   154 Mbits/sec
            [  5] 26.0-28.0 sec  91.9 MBytes   385 Mbits/sec
            [  4] 26.0-28.0 sec  34.4 MBytes   144 Mbits/sec
            [  5] 28.0-30.0 sec  92.6 MBytes   388 Mbits/sec
            [  4] 28.0-30.0 sec  33.8 MBytes   142 Mbits/sec
            [  5] 30.0-32.0 sec  89.6 MBytes   376 Mbits/sec
            [  4] 30.0-32.0 sec  36.2 MBytes   152 Mbits/sec
            [  5] 32.0-34.0 sec  87.9 MBytes   369 Mbits/sec
            [  4] 32.0-34.0 sec  38.7 MBytes   162 Mbits/sec
            [  5] 34.0-36.0 sec  82.1 MBytes   344 Mbits/sec
            [  4] 34.0-36.0 sec  44.5 MBytes   187 Mbits/sec
            [  5] 36.0-38.0 sec  88.9 MBytes   373 Mbits/sec
            [  4] 36.0-38.0 sec  37.3 MBytes   156 Mbits/sec
            [  5] 38.0-40.0 sec  83.0 MBytes   348 Mbits/sec
            [  4] 38.0-40.0 sec  42.9 MBytes   180 Mbits/sec
            [  5] 40.0-42.0 sec  85.4 MBytes   358 Mbits/sec
            [  4] 40.0-42.0 sec  40.5 MBytes   170 Mbits/sec
            [  5] 42.0-44.0 sec  83.0 MBytes   348 Mbits/sec
            [  4] 42.0-44.0 sec  41.5 MBytes   174 Mbits/sec
            [  5] 44.0-46.0 sec  89.6 MBytes   376 Mbits/sec
            [  4] 44.0-46.0 sec  32.9 MBytes   138 Mbits/sec
            [  5] 46.0-48.0 sec  86.5 MBytes   363 Mbits/sec
            [  4] 46.0-48.0 sec  39.9 MBytes   167 Mbits/sec
            [  5] 48.0-50.0 sec  78.9 MBytes   331 Mbits/sec
            [  5]  0.0-50.0 sec  2.10 GBytes   361 Mbits/sec
            [  4] 48.0-50.0 sec  47.9 MBytes   201 Mbits/sec
            [  4]  0.0-50.0 sec   978 MBytes   164 Mbits/sec

              wallabybob:

              @cncking2000:

              The problem is incredibly high latency with polling enabled.

              Don't use polling then! (I will comment later on polling.)

              @cncking2000:

              [2.1-BETA1][admin@router.*****.com]/root(32): top -SH

              last pid: 87256;  load averages:  1.43,  0.77,  0.41    up 0+01:01:42  15:43:49
              128 processes: 4 running, 104 sleeping, 20 waiting
              CPU:  0.0% user,  0.0% nice,  100% system,  0.0% interrupt,  0.0% idle
              Mem: 76M Active, 257M Inact, 92M Wired, 12M Cache, 56M Buf, 13M Free
              Swap: 1024M Total, 1024M Free

              PID USERNAME PRI NICE   SIZE    RES STATE    TIME   WCPU COMMAND
                 0 root     -68    0     0K    64K -        2:03 62.26% kernel{em0 taskq}

              What was going on when you took this snapshot? The iperf run?

              Your vmstat output doesn't show excessive interrupt rates, but the rates shown are averages since startup, not "instantaneous" rates. The top output shows a fully loaded CPU, with most of the load apparently in em0 taskq, which might be quite reasonable depending on what the system was supposed to be doing. You have a single CPU, a 3 GHz Celeron. One rough way to get a closer-to-instantaneous interrupt rate is sketched below.
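
              A minimal sketch, assuming the vmstat -i layout you posted (em0's running total in the third column; adjust the awk field if your output differs):

              ```
              # Sample the em0 interrupt counter twice, 10 seconds apart,
              # then report the average rate over that window.
              a=$(vmstat -i | awk '/em0/ { print $3 }')
              sleep 10
              b=$(vmstat -i | awk '/em0/ { print $3 }')
              echo "em0: $(( (b - a) / 10 )) interrupts/sec over the last 10 seconds"
              ```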

              If you want something to happen on the polling issue you will most likely have to report it to FreeBSD, say via http://www.freebsd.org/send-pr.html, and provide the output of the pfSense shell command:

              ```
              pciconf -l
              ```

              
              I don't know the precise meaning of the iperf numbers reported, but my gut feeling is that they are possibly low for your type of CPU spending 100% of its time in system "mode" (as reported by top). I am guessing you have an iperf server installed on your pfSense box, and that the iperf report segment

              > [  5]  8.0-10.0 sec  101 MBytes  423 Mbits/sec
              > [  4]  8.0-10.0 sec  23.9 MBytes  100 Mbits/sec

              means that in the interval 8 to 10 seconds from the start of the test, one stream sent and received at 423 Mbps and another sent and received at 100 Mbps. That makes for a total of 2 x (423 + 100) Mbps, or about 1 Gbps, processed by the box, which would pretty much saturate a PCI bus (possibly relevant, because there are PCI em devices). A rough check of that arithmetic is below. A bit more context would be helpful in interpreting the numbers you have provided.
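
              A back-of-envelope check (assuming a standard 32-bit, ~33 MHz PCI bus, whose theoretical ceiling is about 1067 Mbps shared by every device on it):

              ```
              # Duplex iperf traffic crossing the box, in Mbps:
              echo $(( 2 * (423 + 100) ))   # => 1046, essentially the whole bus
              # Theoretical 32-bit x 33 MHz PCI ceiling, in Mbps (before overhead):
              echo $(( 32 * 33 ))           # => 1056 (~1067 with the exact 33.33 MHz clock)
              ```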
                cncking2000:

                Yeah, the testing is using iperf. I can get far better numbers from the same test against a Linux machine running the iperf server, on identical hardware. I have re-run the tests on both to make sure, and also changed to a single thread.

                IPerf on Server

                pca@PCA-Linux ~ $ iperf -c 10.1.10.71 -i 2 -t 50
                ------------------------------------------------------------
                Client connecting to 10.1.10.71, TCP port 5001
                TCP window size: 22.9 KByte (default)

                [  3] local 10.1.10.121 port 34919 connected with 10.1.10.71 port 5001
                [ ID] Interval      Transfer    Bandwidth
                [  3]  0.0- 2.0 sec  205 MBytes  860 Mbits/sec
                [  3]  2.0- 4.0 sec  217 MBytes  911 Mbits/sec
                [  3]  4.0- 6.0 sec  219 MBytes  919 Mbits/sec
                [  3]  6.0- 8.0 sec  217 MBytes  911 Mbits/sec
                [  3]  8.0-10.0 sec  218 MBytes  915 Mbits/sec
                [  3] 10.0-12.0 sec  218 MBytes  916 Mbits/sec
                [  3] 12.0-14.0 sec  218 MBytes  915 Mbits/sec
                [  3] 14.0-16.0 sec  218 MBytes  914 Mbits/sec
                [  3] 16.0-18.0 sec  218 MBytes  914 Mbits/sec
                [  3] 18.0-20.0 sec  218 MBytes  915 Mbits/sec
                [  3] 20.0-22.0 sec  219 MBytes  918 Mbits/sec
                [  3] 22.0-24.0 sec  219 MBytes  917 Mbits/sec
                [  3] 24.0-26.0 sec  217 MBytes  912 Mbits/sec
                [  3] 26.0-28.0 sec  218 MBytes  914 Mbits/sec
                [  3] 28.0-30.0 sec  218 MBytes  915 Mbits/sec
                [  3] 30.0-32.0 sec  219 MBytes  920 Mbits/sec
                [  3] 32.0-34.0 sec  218 MBytes  913 Mbits/sec
                [  3] 34.0-36.0 sec  218 MBytes  913 Mbits/sec
                [  3] 36.0-38.0 sec  219 MBytes  920 Mbits/sec
                [  3] 38.0-40.0 sec  217 MBytes  910 Mbits/sec
                [  3] 40.0-42.0 sec  218 MBytes  915 Mbits/sec
                [  3] 42.0-44.0 sec  219 MBytes  920 Mbits/sec
                [  3] 44.0-46.0 sec  217 MBytes  910 Mbits/sec
                [  3] 46.0-48.0 sec  218 MBytes  915 Mbits/sec
                [  3] 48.0-50.0 sec  218 MBytes  916 Mbits/sec
                [  3]  0.0-50.0 sec  5.31 GBytes  913 Mbits/sec
                pca@PCA-Linux ~ $

                Top on Linux Box.

                root@cloud:~# top -SH

                top - 16:50:12 up 9 days,  1:03,  2 users,  load average: 0.00, 0.00, 0.00
                Tasks: 130 total,  1 running, 129 sleeping,  0 stopped,  0 zombie
                Cpu(s):  0.7%us, 11.3%sy,  0.0%ni, 86.4%id,  0.0%wa,  0.0%hi,  1.7%si,  0.0%st
                Mem:  1553960k total,  336700k used,  1217260k free,    23408k buffers
                Swap:  3034104k total,    31976k used,  3002128k free,  131940k cached

                PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND           
                6774 root      20  0 27848 1064  908 S 13.9  0.1  0:01.86 iperf

                IPerf on Router

                pca@PCA-Linux ~ $ iperf -c 10.1.10.1 -i 2 -t 50
                ------------------------------------------------------------
                Client connecting to 10.1.10.1, TCP port 5001
                TCP window size: 22.9 KByte (default)

                [  3] local 10.1.10.121 port 36129 connected with 10.1.10.1 port 5001
                [ ID] Interval      Transfer    Bandwidth
                [  3]  0.0- 2.0 sec  120 MBytes  502 Mbits/sec
                [  3]  2.0- 4.0 sec  120 MBytes  505 Mbits/sec
                [  3]  4.0- 6.0 sec  116 MBytes  485 Mbits/sec
                [  3]  6.0- 8.0 sec  117 MBytes  490 Mbits/sec
                [  3]  8.0-10.0 sec  117 MBytes  489 Mbits/sec
                [  3] 10.0-12.0 sec  117 MBytes  490 Mbits/sec
                [  3] 12.0-14.0 sec  120 MBytes  503 Mbits/sec
                [  3] 14.0-16.0 sec  120 MBytes  501 Mbits/sec
                [  3] 16.0-18.0 sec  118 MBytes  497 Mbits/sec
                [  3] 18.0-20.0 sec  119 MBytes  498 Mbits/sec
                [  3] 20.0-22.0 sec  120 MBytes  504 Mbits/sec
                [  3] 22.0-24.0 sec  120 MBytes  504 Mbits/sec
                [  3] 24.0-26.0 sec  120 MBytes  502 Mbits/sec
                [  3] 26.0-28.0 sec  121 MBytes  506 Mbits/sec
                [  3] 28.0-30.0 sec  121 MBytes  506 Mbits/sec
                [  3] 30.0-32.0 sec  123 MBytes  516 Mbits/sec
                [  3] 32.0-34.0 sec  122 MBytes  513 Mbits/sec
                [  3] 34.0-36.0 sec  123 MBytes  516 Mbits/sec
                [  3] 36.0-38.0 sec  123 MBytes  515 Mbits/sec
                [  3] 38.0-40.0 sec  124 MBytes  522 Mbits/sec
                [  3] 40.0-42.0 sec  124 MBytes  522 Mbits/sec
                [  3] 42.0-44.0 sec  125 MBytes  524 Mbits/sec
                [  3] 44.0-46.0 sec  125 MBytes  523 Mbits/sec
                [  3] 46.0-48.0 sec  124 MBytes  522 Mbits/sec
                [  3] 48.0-50.0 sec  124 MBytes  522 Mbits/sec
                [  3]  0.0-50.0 sec  2.95 GBytes  507 Mbits/sec
                pca@PCA-Linux ~ $

                Router TOP during test:

                last pid: 44285;  load averages:  0.84,  0.62,  0.30    up 0+01:57:56  16:40:03
                127 processes: 4 running, 103 sleeping, 20 waiting
                CPU:  6.1% user,  0.0% nice, 93.9% system,  0.0% interrupt,  0.0% idle
                Mem: 104M Active, 236M Inact, 81M Wired, 16M Cache, 56M Buf, 14M Free
                Swap: 1024M Total, 1024M Free

                PID USERNAME PRI NICE  SIZE    RES STATE    TIME  WCPU COMMAND
                    0 root    -68    0    0K    64K -        4:48 67.29% kernel{em0 taskq}

                  cncking2000:

                  Anything at all? This router is separating servers from the local network, and I cannot continue to use it with this sub-par performance. I would like to, but not being able to utilize a gigabit network is a bit frustrating.

                    wallabybob:

                    @cncking2000:

                    Anything at all?

                    A few things.

                    1. As I pointed out in a previous reply, it is possible the iperf performance on your pfSense box is limited by the hardware. Please post the output of the pfSense shell command:

                    ```
                    pciconf -l -v
                    ```

                    2. Your description doesn't make clear the difference in configuration between the tests involving the Linux server and those involving pfSense. Is exactly the same hardware used to run Linux and pfSense? Are the interfaces the same, down to the exact bus type and NIC chipset?

                    3. iperf is a useful benchmark, but you shouldn't extrapolate from iperf performance to the packet forwarding performance of a router: iperf moves data between user space and kernel space (a well-known high-overhead operation), while routers forward packets between interfaces WITHOUT moving data between kernel and user space. A more representative test is sketched below.
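
                    A minimal sketch of such a test (the address is hypothetical; it assumes one host on each side of the pfSense box, so pfSense only forwards packets instead of terminating the TCP stream):

                    ```
                    # On a host behind the pfSense box:
                    iperf -s
                    # On a host on the other interface, aimed at that host
                    # rather than at the router itself:
                    iperf -c <host-behind-router> -i 2 -t 50
                    ```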
                    
                    @cncking2000:
                    
                    > This router is separating servers from the local network,
                    
                    I suggest the first thing would be to verify that the hardware you have chosen is actually capable of moving data at gigabit speeds between two gigabit interfaces. If the interface to the servers and the interface to the "local network" are both on the same "standard" PCI bus, then it is physically impossible to get sustained gigabit speeds between the interfaces. Your pfSense startup output mentions AGP and SiS and doesn't mention SATA, all of which suggests the box is of a generation unlikely to have more than a single standard PCI bus; see the sketch below.
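
                    Rough numbers, assuming both NICs share one standard 32-bit, ~33 MHz PCI bus (each forwarded bit crosses the bus twice, once NIC to RAM and once RAM to NIC):

                    ```
                    # Bus traffic needed to route 1 Gbps between the two NICs, in Mbps:
                    echo $(( 2 * 1000 ))   # => 2000
                    # Theoretical shared ceiling of the bus, in Mbps (before overhead):
                    echo $(( 32 * 33 ))    # => 1056, roughly half of what is needed
                    ```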
                      cmb:

                      You're expecting gigabit wire speed, from the sounds of it. You're not going to get it with a Celeron; you're wanting Mustang performance out of a Pinto. The difference in cache between a Celeron and an equivalent Pentium 4 makes for a significant performance difference in a firewall. Commercial firewalls that will push gigabit wire speed run upwards of $10K USD, and you're trying to do it with a box that probably wouldn't sell for $50 on eBay.

                      Ditch the sis NIC and the Celeron proc, and have nothing on a plain PCI bus: PCI-X or PCI-e only. Don't use polling.

                        cncking2000:

                        So, what you are suggesting is that I try an HT 3.2 GHz P4 and a 4x PCI-E dual-port Intel network card in the 16x PCI-E slot, since that other box has onboard video? I can't convince management to spend much on this, but it sure as hell beats using the old Linksys. This facility is too large for a simple router.

                          wallabybob:

                          @cncking2000:

                          So, what you are suggesting is that I try an HT 3.2 GHz P4 and a 4x PCI-E dual-port Intel network card in the 16x PCI-E slot, since that other box has onboard video?

                          And run a more appropriate benchmark.
