Netgate Discussion Forum

    High ping and packet loss in local network

    General pfSense Questions
    29 Posts 5 Posters 16.9k Views
    • B
      bullet92
      last edited by

      Hi to all!
      I have a big problem with high ping and packet loss on my pfSense box. I'm on the latest version [2.1-RELEASE (i386), built on Wed Sep 11 18:16:22 EDT 2013] and I've also tried the 64-bit version.

      I have a multi-WAN, multi-LAN configuration, and the issue happens more often under load.
      This is my hardware configuration:

      sysctl -a | egrep -i 'hw.machine|hw.model|hw.ncpu'

      hw.machine: i386
      hw.model: Intel(R) Xeon(R) CPU            5120  @ 1.86GHz
      hw.ncpu: 4
      hw.machine_arch: i386
      
      

      Info from the pfSense dashboard:

      State table size 	2% (3469/202000)
      MBUF Usage 	7% (17782/262144)
      Load average 	1.36, 1.27, 1.12
      CPU usage 	  27%
      Memory usage 	  11% of 2027 MB
      Disk usage 	  18% of 1.8G 
      

      top

      last pid: 23073;  load averages:  1.46,  1.30,  1.15                                                   up 0+00:39:33  16:36:58
      37 processes:  1 running, 36 sleeping
      CPU:  2.1% user,  0.0% nice, 26.9% system,  0.1% interrupt, 71.0% idle
      Mem: 63M Active, 19M Inact, 151M Wired, 1036K Cache, 85M Buf, 1756M Free
      

      em2 and em3 are the motherboard's onboard NICs;
      em0 and em1 are a dual-port HP NIC with an Intel chipset (as you can see):

      pciconf -lv

      em0@pci0:3:0:0: class=0x020000 card=0x7044103c chip=0x105e8086 rev=0x06 hdr=0x00
          class      = network
          subclass   = ethernet
      em1@pci0:3:0:1: class=0x020000 card=0x7044103c chip=0x105e8086 rev=0x06 hdr=0x00
          class      = network
          subclass   = ethernet
      em2@pci0:4:0:0: class=0x020000 card=0x63801462 chip=0x10968086 rev=0x01 hdr=0x00
          class      = network
          subclass   = ethernet
      em3@pci0:4:0:1: class=0x020000 card=0x63801462 chip=0x10968086 rev=0x01 hdr=0x00
          class      = network
          subclass   = ethernet
      
      

      vmstat -i

      interrupt                          total       rate
      irq14: ata0                           57          0
      irq19: uhci1+                       4154          3
      cpu0: timer                       506443        399
      irq256: em0                            1          0
      irq257: em1                       123598         97
      cpu3: timer                       506419        399
      cpu2: timer                       506418        399
      cpu1: timer                       506418        399
      Total                            2153508       1699
      
      

      /boot/loader.conf

      comconsole_speed="9600"
      hw.usb.no_pf="1"
      
      kern.ipc.nmbclusters="262144"
      kern.ipc.somaxconn="4096"
      
      #kern.ipc.maxsockets="204800"   \
      #kern.ipc.nmbjumbop="192000"     |-- already tried enabling these, no change
      #kern.sched.slice="1"           /
      
      hw.em.rxd="4096"
      hw.em.txd="4096"
      hw.em.fc_setting="0"
      
      net.inet.tcp.sendbuf_max=16777216
      net.inet.tcp.recvbuf_max=16777216
      net.inet.tcp.sendbuf_inc=16384
      net.inet.tcp.recvbuf_inc=524288
      net.inet.tcp.cc.algorithm=htcp
      net.inet.tcp.recvspace=1024000
      net.inet.tcp.sendspace=1024000
      

      Interface configuration:

       WAN (wan)       -> em1        -> v4: 192.168.1.130/24
       DMZ (lan)       -> em0        -> v4: 172.16.30.5/24
       WAN2 (opt1)     -> em1_vlan13 -> v4: 192.168.2.5/24
       WIBRI (opt2)    -> em3        -> v4: 192.168.168.5/24
       SEDE (opt3)     -> em0_vlan10 -> v4: 192.168.132.5/24
       WAN3 (opt4)     -> em2        -> v4: 217.xx.xx.30/27
       MANAG (opt5)    -> em3_vlan999 -> v4: 192.168.167.5/24
      
      

      ifconfig

      em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
              options=521db<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,POLLING,VLAN_HWCSUM,TSO4,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO>
              ether 00:26:55:e3:3f:66
              inet6 fe80::226:55ff:fee3:3f66%em0 prefixlen 64 scopeid 0x1
              inet 172.16.30.5 netmask 0xffffff00 broadcast 172.16.30.255
              nd6 options=1<PERFORMNUD>
              media: Ethernet autoselect (100baseTX <full-duplex>)
              status: active
      em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
              options=5019b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,VLAN_HWFILTER,VLAN_HWTSO>
              ether 00:26:55:e3:3f:67
              inet6 fe80::226:55ff:fee3:3f67%em1 prefixlen 64 scopeid 0x2
              inet 192.168.1.130 netmask 0xffffff00 broadcast 192.168.1.255
              nd6 options=1<PERFORMNUD>
              media: Ethernet autoselect (1000baseT <full-duplex>)
              status: active
      em2: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
              options=521db<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,POLLING,VLAN_HWCSUM,TSO4,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO>
              ether 00:1d:92:d7:b4:b4
              inet6 fe80::21d:92ff:fed7:b4b4%em2 prefixlen 64 scopeid 0x3
              inet 217.xx.xx.30 netmask 0xffffffe0 broadcast 217.xx.xx.31
              inet 217.xx.xx.29 netmask 0xffffffe0 broadcast 217.xx.xx.31
              nd6 options=1<PERFORMNUD>
              media: Ethernet autoselect (1000baseT <full-duplex>)
              status: active
      em3: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
              options=521db<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,POLLING,VLAN_HWCSUM,TSO4,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO>
              ether 00:1d:92:d7:b4:b5
              inet6 fe80::21d:92ff:fed7:b4b5%em3 prefixlen 64 scopeid 0x4
              inet 192.168.168.5 netmask 0xffffff00 broadcast 192.168.168.255
              nd6 options=1<PERFORMNUD>
              media: Ethernet autoselect (1000baseT <full-duplex>)
              status: active
      lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
              options=3<RXCSUM,TXCSUM>
              inet 127.0.0.1 netmask 0xff000000
              inet6 ::1 prefixlen 128
              inet6 fe80::1%lo0 prefixlen 64 scopeid 0x5
              nd6 options=3<PERFORMNUD,ACCEPT_RTADV>
      enc0: flags=0<> metric 0 mtu 1536
      pflog0: flags=100<PROMISC> metric 0 mtu 33192
      pfsync0: flags=0<> metric 0 mtu 1460
              syncpeer: 224.0.0.240 maxupd: 128 syncok: 1
      em0_vlan10: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
              options=103<RXCSUM,TXCSUM,TSO4>
              ether 00:26:55:e3:3f:66
              inet6 fe80::226:55ff:fee3:3f66%em0_vlan10 prefixlen 64 scopeid 0x9
              inet 192.168.132.5 netmask 0xffffff00 broadcast 192.168.132.255
              nd6 options=1<PERFORMNUD>
              media: Ethernet autoselect (100baseTX <full-duplex>)
              status: active
              vlan: 10 vlanpcp: 0 parent interface: em0
      em3_vlan999: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
              options=103<RXCSUM,TXCSUM,TSO4>
              ether 00:1d:92:d7:b4:b5
              inet6 fe80::226:55ff:fee3:3f66%em3_vlan999 prefixlen 64 scopeid 0xa
              inet 192.168.167.5 netmask 0xffffff00 broadcast 192.168.167.255
              nd6 options=1<PERFORMNUD>
              media: Ethernet autoselect (1000baseT <full-duplex>)
              status: active
              vlan: 999 vlanpcp: 0 parent interface: em3
      em1_vlan13: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1496
              options=103<RXCSUM,TXCSUM,TSO4>
              ether 00:26:55:e3:3f:67
              inet6 fe80::226:55ff:fee3:3f66%em1_vlan13 prefixlen 64 scopeid 0xb
              inet 192.168.2.5 netmask 0xffffff00 broadcast 192.168.2.255
              nd6 options=1<PERFORMNUD>
              media: Ethernet autoselect (1000baseT <full-duplex>)
              status: active
              vlan: 13 vlanpcp: 0 parent interface: em1
      em2_vlan12: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
              options=103<RXCSUM,TXCSUM,TSO4>
              ether 00:1d:92:d7:b4:b4
              inet6 fe80::226:55ff:fee3:3f66%em2_vlan12 prefixlen 64 scopeid 0xc
              nd6 options=3<PERFORMNUD,ACCEPT_RTADV>
              media: Ethernet autoselect (1000baseT <full-duplex>)
              status: active
              vlan: 12 vlanpcp: 0 parent interface: em2
      em2_vlan13: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
              options=103<RXCSUM,TXCSUM,TSO4>
              ether 00:1d:92:d7:b4:b4
              inet6 fe80::226:55ff:fee3:3f66%em2_vlan13 prefixlen 64 scopeid 0xd
              nd6 options=3<PERFORMNUD,ACCEPT_RTADV>
              media: Ethernet autoselect (1000baseT <full-duplex>)
              status: active
              vlan: 13 vlanpcp: 0 parent interface: em2
      

      The problem happens on em1 and em1_vlan13:

      PING 192.168.2.1 (192.168.2.1) from 192.168.2.5: 56 data bytes
      64 bytes from 192.168.2.1: icmp_seq=0 ttl=64 time=46.213 ms
      64 bytes from 192.168.2.1: icmp_seq=1 ttl=64 time=19.940 ms
      64 bytes from 192.168.2.1: icmp_seq=2 ttl=64 time=6.502 ms
      64 bytes from 192.168.2.1: icmp_seq=3 ttl=64 time=22.061 ms
      64 bytes from 192.168.2.1: icmp_seq=4 ttl=64 time=14.110 ms
      64 bytes from 192.168.2.1: icmp_seq=5 ttl=64 time=0.195 ms  <--- THIS SHOULD BE THE CORRECT PING, because this is the internal WAN2 gateway
      64 bytes from 192.168.2.1: icmp_seq=6 ttl=64 time=19.935 ms
      64 bytes from 192.168.2.1: icmp_seq=7 ttl=64 time=8.359 ms
      64 bytes from 192.168.2.1: icmp_seq=8 ttl=64 time=0.194 ms
      64 bytes from 192.168.2.1: icmp_seq=9 ttl=64 time=28.343 ms

      --- 192.168.2.1 ping statistics ---
      10 packets transmitted, 10 packets received, 0.0% packet loss
      round-trip min/avg/max/stddev = 0.194/16.585/46.213/13.346 ms

      PING 192.168.1.254 (192.168.1.254) from 192.168.1.130: 56 data bytes
      64 bytes from 192.168.1.254: icmp_seq=0 ttl=64 time=24.387 ms
      64 bytes from 192.168.1.254: icmp_seq=1 ttl=64 time=0.670 ms  <--- THIS SHOULD BE THE CORRECT PING, because this is the internal WAN gateway
      64 bytes from 192.168.1.254: icmp_seq=4 ttl=64 time=16.875 ms
      64 bytes from 192.168.1.254: icmp_seq=5 ttl=64 time=10.469 ms
      64 bytes from 192.168.1.254: icmp_seq=6 ttl=64 time=60.434 ms
      64 bytes from 192.168.1.254: icmp_seq=7 ttl=64 time=16.825 ms
      64 bytes from 192.168.1.254: icmp_seq=8 ttl=64 time=4.241 ms
      64 bytes from 192.168.1.254: icmp_seq=9 ttl=64 time=81.450 ms

      --- 192.168.1.254 ping statistics ---
      10 packets transmitted, 8 packets received, 20.0% packet loss
      round-trip min/avg/max/stddev = 0.670/26.919/81.450/26.879 ms

      The packet loss and high ping from the internal network to the WANs mean that the internet experience is very bad.
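      To quantify how bad the jitter is, the raw ping output above can be summarized on any spare machine. A minimal illustrative sketch (parse_ping is a hypothetical helper, not a pfSense tool; it understands both the FreeBSD and Linux summary wording):

```python
import re
import statistics

def parse_ping(output):
    """Parse ping(8)-style output into RTT samples and a loss percentage.

    Hypothetical helper for offline analysis: it reads lines like
    '64 bytes from ...: icmp_seq=0 ttl=64 time=46.213 ms' plus the
    'N packets transmitted, M (packets) received' summary line.
    """
    rtts = [float(m.group(1)) for m in re.finditer(r"time=([\d.]+) ms", output)]
    m = re.search(r"(\d+) packets transmitted, (\d+)(?: packets)? received", output)
    sent, received = (int(m.group(1)), int(m.group(2))) if m else (len(rtts), len(rtts))
    loss = 100.0 * (sent - received) / sent if sent else 0.0
    return rtts, loss

# Three of the WAN-gateway samples quoted above, plus the summary line:
sample = """\
64 bytes from 192.168.1.254: icmp_seq=0 ttl=64 time=24.387 ms
64 bytes from 192.168.1.254: icmp_seq=1 ttl=64 time=0.670 ms
64 bytes from 192.168.1.254: icmp_seq=6 ttl=64 time=60.434 ms
10 packets transmitted, 8 packets received, 20.0% packet loss
"""
rtts, loss = parse_ping(sample)
print(f"samples={len(rtts)} min={min(rtts)} max={max(rtts)} "
      f"jitter(stdev)={statistics.stdev(rtts):.3f} ms loss={loss:.1f}%")
```

      Comparing the stddev under load vs. idle gives a number to track while testing, instead of eyeballing individual replies.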

      I've tried to:
      1. disable load balancing and put all traffic on the working WAN3 (em2), but the ping got worse
      2. put all the WANs on em2 with VLANs, but the ping got worse
      3. change the cables
      4. bring down em3 (the network that consumes the most WAN bandwidth, so traffic drops to approximately 0), and the ping got BETTER
      5. enable/disable polling, TSO, and checksum offload
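      The polling/TSO/checksum toggling above can be scripted so the same offload settings are applied consistently across all four NICs while testing. A dry-run sketch (it only prints the commands; -tso, -rxcsum and -txcsum are stock FreeBSD ifconfig(8) options, but verify them against your release before piping the output to sh):

```shell
#!/bin/sh
# Dry run: print (do not execute) the ifconfig commands that would
# disable TSO and hardware checksum offload on each NIC in this thread.
IFACES="em0 em1 em2 em3"

CMDS=""
for ifc in $IFACES; do
    CMDS="${CMDS}ifconfig ${ifc} -tso -rxcsum -txcsum
"
done
printf '%s' "$CMDS"
```

      On the pfSense box you would pipe the output to sh, re-run the ping test, then re-enable with tso/rxcsum/txcsum (no leading dash) to compare.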

      What could the problem be?
      Please help me =)
      If you need more information, just ask. Regards.

      EDIT: added txt information

      limits.txt
      [netstat -q.txt](/public/imported_attachments/1/netstat -q.txt)
      [netstat -s.txt](/public/imported_attachments/1/netstat -s.txt)
      [sysctl dev.em.txt](/public/imported_attachments/1/sysctl dev.em.txt)
      [vmstats -z.txt](/public/imported_attachments/1/vmstats -z.txt)

      • johnpozJ
        johnpoz LAYER 8 Global Moderator
        last edited by

        You're pinging from pfSense to another router, over a switched network, and seeing high response times. Doesn't that point more to the IP you're pinging having issues, rather than to pfSense?

        Or just to the connection being loaded in general, or to the switches you're running through?

        pfSense sends out a packet at time X; the packet comes back to pfSense at time X+Y. pfSense has nothing to do with that Y time - the packet has already been put on the wire.

        What kind of traffic are you pushing? How about the info below:

        systat -ifstat 1

        and then say

        netstat -i -b -n -I interface

        An intelligent man is sometimes forced to be drunk to spend time with his fools
        If you get confused: Listen to the Music Play
        Please don't Chat/PM me for help, unless mod related
        SG-4860 24.11 | Lab VMs 2.8, 24.11

        • B
          bullet92
          last edited by

          Yes, this is my ping from pfSense to another router, and your argument seems to be correct, but if I ping from another machine on the same switch I get the expected ping values:

          ping -c 10 192.168.1.254
          PING 192.168.1.254 (192.168.1.254) 56(84) bytes of data.
          64 bytes from 192.168.1.254: icmp_req=1 ttl=64 time=0.943 ms
          64 bytes from 192.168.1.254: icmp_req=2 ttl=64 time=0.955 ms
          64 bytes from 192.168.1.254: icmp_req=3 ttl=64 time=1.13 ms
          64 bytes from 192.168.1.254: icmp_req=4 ttl=64 time=1.01 ms
          64 bytes from 192.168.1.254: icmp_req=5 ttl=64 time=0.954 ms
          64 bytes from 192.168.1.254: icmp_req=6 ttl=64 time=0.987 ms
          64 bytes from 192.168.1.254: icmp_req=7 ttl=64 time=1.11 ms
          64 bytes from 192.168.1.254: icmp_req=8 ttl=64 time=0.991 ms
          64 bytes from 192.168.1.254: icmp_req=9 ttl=64 time=1.16 ms
          64 bytes from 192.168.1.254: icmp_req=10 ttl=64 time=0.939 ms
          
          --- 192.168.1.254 ping statistics ---
          10 packets transmitted, 10 received, 0% packet loss, time 9013ms
          rtt min/avg/max/mdev = 0.939/1.019/1.165/0.092 ms
          
          
          ping -c 10 192.168.2.1
          PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
          64 bytes from 192.168.2.1: icmp_req=1 ttl=64 time=0.029 ms
          64 bytes from 192.168.2.1: icmp_req=2 ttl=64 time=0.027 ms
          64 bytes from 192.168.2.1: icmp_req=3 ttl=64 time=0.032 ms
          64 bytes from 192.168.2.1: icmp_req=4 ttl=64 time=0.033 ms
          64 bytes from 192.168.2.1: icmp_req=5 ttl=64 time=0.031 ms
          64 bytes from 192.168.2.1: icmp_req=6 ttl=64 time=0.028 ms
          64 bytes from 192.168.2.1: icmp_req=7 ttl=64 time=0.032 ms
          64 bytes from 192.168.2.1: icmp_req=8 ttl=64 time=0.033 ms
          64 bytes from 192.168.2.1: icmp_req=9 ttl=64 time=0.033 ms
          64 bytes from 192.168.2.1: icmp_req=10 ttl=64 time=0.032 ms
          
          --- 192.168.2.1 ping statistics ---
          10 packets transmitted, 10 received, 0% packet loss, time 8998ms
          rtt min/avg/max/mdev = 0.027/0.031/0.033/0.002 ms
          
          

          and as you can see, the load is not high (I think):

          systat -ifstat 1

                              /0   /1   /2   /3   /4   /5   /6   /7   /8   /9   /10
               Load Average   ||
          
                Interface           Traffic               Peak                Total
               em1_vlan13  in    756.323 KB/s        756.323 KB/s            1.568 GB
                           out   107.138 KB/s        152.320 KB/s          795.237 MB
          
               em0_vlan10  in      3.422 KB/s          3.978 KB/s           22.661 MB
                           out    15.332 KB/s         15.489 KB/s          215.933 MB
          
                      lo0  in      0.231 KB/s          0.529 KB/s           53.768 KB
                           out     0.231 KB/s          0.529 KB/s           53.768 KB
          
                      em3  in    134.747 KB/s        172.430 KB/s          867.618 MB
                           out  1018.783 KB/s          1.200 MB/s            2.663 GB
          
                      em2  in     12.804 KB/s        158.291 KB/s          119.529 MB
                           out     9.595 KB/s         29.871 KB/s          115.247 MB
          
                      em1  in   1004.992 KB/s          1.165 MB/s            2.589 GB
                           out   107.702 KB/s        123.642 KB/s          670.917 MB
          
                      em0  in      4.167 KB/s         15.851 KB/s           85.611 MB
                           out     8.410 KB/s         13.980 KB/s          154.535 MB
          
          

          Here there are some packet errors, but I have the same problem on em1, which has no packet errors.
          Furthermore, if I put this VLAN on another interface I get no packet errors in this screen, but the ping issue still remains.
          To avoid this I've tried lowering the MTU on em1_vlan13, without any change.
          netstat -i -b -n -I em1_vlan13

          Name               Mtu Network       Address              Ipkts Ierrs Idrop     Ibytes    Opkts Oerrs     Obytes  Coll
          em1_vlan13        1496 <Link#11>     00:26:55:e3:3f:67  1751465     0     0 1785886411  1428784 42076  855071125     0
          em1_vlan13        1496 fe80::226:55f fe80::226:55ff:fe        0     -     -          0        2     -        152     -
          em1_vlan13        1496 192.168.2.0/2 192.168.2.5           8644     -     -     553798        9     -        636     -
          

          netstat -i -b -n -I em1

          Name               Mtu Network       Address              Ipkts Ierrs Idrop     Ibytes    Opkts Oerrs     Obytes  Coll
          em1               1500 <Link#2>      00:26:55:e3:3f:67  2883590     0     0 2920905353  2283212     0  720613514     0
          em1               1500 fe80::226:55f fe80::226:55ff:fe        0     -     -          0        1     -         76     -
          em1               1500 192.168.1.0/2 192.168.1.130        21341     -     -    2327731       20     -       1294     -
          

          netstat -i -b -n -I em0

          Name               Mtu Network       Address              Ipkts Ierrs Idrop     Ibytes    Opkts Oerrs     Obytes  Coll
          em0               1500 <Link#1>      00:26:55:e3:3f:66   416717     0     0   92553286   432120     0  165618866     0
          em0               1500 fe80::226:55f fe80::226:55ff:fe        0     -     -          0        0     -          0     -
          em0               1500 172.16.30.0/2 172.16.30.5           2228     -     -     141767     2210     -     190879     -
          

          netstat -i -b -n -I em3

          Name               Mtu Network       Address              Ipkts Ierrs Idrop     Ibytes    Opkts Oerrs     Obytes  Coll
          em3               1500 <Link#4>      00:1d:92:d7:b4:b5  3091414     0     0  942983836  3436801     0 3053508653     0
          em3               1500 fe80::21d:92f fe80::21d:92ff:fe        0     -     -          0        2     -        152     -
          em3               1500 192.168.168.0 192.168.168.5         7964     -     -     533711     8000     -    1042855     -
          

          waiting for your reply, regards :)

          • N
            NOYB
            last edited by

            I have a similar problem, though I've never been able to reproduce it with a manual ping.

            What I see is that the gateway monitor of a local router will suddenly begin having intermittent packet loss after pfSense has been online anywhere from a few days to a couple of weeks. Restarting pfSense, with no action taken on the target gateway, fixes it for a while.

            pfSense has a single physical interface connected to a smart switch (Cisco SG200-8).
            LAN uses the physical interface.
            WAN uses VLAN 98 to the ISP.
            TVLAN uses VLAN 97 to a local router (Actiontec MI424-WR); this is the one with the issue (average ping is about 0.4 ms).

            This has been going on for quite some time. I posted a question about it in one of these forums a few months ago.

            2.1-RELEASE  (i386)
            built on Wed Sep 11 18:16:50 EDT 2013

            FreeBSD 8.3-RELEASE-p11

            ![Gateway Packet Loss.gif](/public/imported_attachments/1/Gateway Packet Loss.gif)

            • johnpozJ
              johnpoz LAYER 8 Global Moderator
              last edited by

              So what is the path pfSense takes vs. the path your other machine takes when you ping the upstream (pfSense gateway), i.e. the .2.1 and .1.254 addresses?

              I would assume they are all connected to some common switch - is it possible the switch is having issues moving traffic from pfSense to the gateway, versus from this other connected device?

              It just seems strange to me that ping response times from pfSense to X would point to an issue on pfSense.

              pfSense puts a ping on the wire, and there is a response. Would pfSense having issues modify these times? Let's say the response comes in 0.5 seconds (500 ms): how would pfSense, as it moves it up the stack, change that time to 10 ms?

              I could see a delay in the output of the command, say, but would it modify the times??

              Also, your pings there are in the 0.03 ms range to .2.1 - are you sure you were not pinging yourself?? That is a really FAST response, even for a LAN.


              • B
                bullet92
                last edited by

                @NOYB:

                I have a similar problem, though I've never been able to reproduce it with a manual ping.

                What I see is that the gateway monitor of a local router will suddenly begin having intermittent packet loss after pfSense has been online anywhere from a few days to a couple of weeks. Restarting pfSense, with no action taken on the target gateway, fixes it for a while.
                […]
                2.1-RELEASE  (i386)
                built on Wed Sep 11 18:16:50 EDT 2013

                FreeBSD 8.3-RELEASE-p11

                I think that your problem isn't a real problem, but a virtual one: when you have the packet loss, your latency is still almost good! In 2.1 there is a problem with fake packet loss; look at this thread: http://forum.pfsense.org/index.php?topic=66328.0. If you want to be sure that the packet loss is fake, try the smokeping utility on another machine.

                Also, your pings there are in the 0.03 ms range to .2.1 - are you sure you were not pinging yourself?? That is a really FAST response, even for a LAN.

                Yes, sorry, my mistake with a VM ::)
                This is the correct ping:

                ping -S 192.168.2.10 -c10 192.168.2.1
                PING 192.168.2.1 (192.168.2.1) 56(84) bytes of data.
                64 bytes from 192.168.2.1: icmp_req=1 ttl=64 time=0.603 ms
                64 bytes from 192.168.2.1: icmp_req=2 ttl=64 time=0.636 ms
                64 bytes from 192.168.2.1: icmp_req=3 ttl=64 time=0.637 ms
                64 bytes from 192.168.2.1: icmp_req=4 ttl=64 time=0.645 ms
                64 bytes from 192.168.2.1: icmp_req=5 ttl=64 time=0.653 ms
                64 bytes from 192.168.2.1: icmp_req=6 ttl=64 time=0.617 ms
                64 bytes from 192.168.2.1: icmp_req=7 ttl=64 time=0.604 ms
                64 bytes from 192.168.2.1: icmp_req=8 ttl=64 time=0.633 ms
                64 bytes from 192.168.2.1: icmp_req=9 ttl=64 time=0.969 ms
                64 bytes from 192.168.2.1: icmp_req=10 ttl=64 time=0.602 ms
                
                --- 192.168.2.1 ping statistics ---
                10 packets transmitted, 10 received, 0% packet loss, time 9001ms
                rtt min/avg/max/mdev = 0.602/0.659/0.969/0.110 ms
                

                My hardware configuration is very simple; that's why I think the problem is pfSense.
                Scheme in attachments.
                This isn't a simplification, but my real network (WAN side), where every arrow represents a cable.

                It just seems strange to me that ping response times from pfSense to X would point to an issue on pfSense.

                pfSense puts a ping on the wire, and there is a response. Would pfSense having issues modify these times? Let's say the response comes in 0.5 seconds (500 ms): how would pfSense, as it moves it up the stack, change that time to 10 ms?

                It's strange for me too, but I think it's the most likely explanation. Maybe pfSense increases its X time to put the ping on the wire, and consequently the latency increases.

                network.jpg

                • B
                  bullet92
                  last edited by

                  Hi to all. Today I'm seeing improvements.

                  This is my current load:

                  
                                      /0   /1   /2   /3   /4   /5   /6   /7   /8   /9   /10
                       Load Average
                  
                        Interface           Traffic               Peak                Total
                       em1_vlan13  in     12.636 KB/s          1.346 MB/s          946.988 MB
                                   out     9.207 KB/s        101.088 KB/s           69.776 MB
                  
                       em0_vlan10  in      1.307 KB/s         58.705 KB/s           21.818 MB
                                   out     7.329 KB/s          3.202 MB/s          517.675 MB
                  
                              lo0  in      0.065 KB/s          0.464 KB/s           47.049 KB
                                   out     0.065 KB/s          0.464 KB/s           47.049 KB
                  
                              em3  in     22.275 KB/s        151.768 KB/s          181.590 MB
                                   out    39.036 KB/s          1.423 MB/s            2.242 GB
                  
                              em2  in      3.303 KB/s         11.335 KB/s            3.329 MB
                                   out     3.437 KB/s         38.118 KB/s            4.351 MB
                  
                              em1  in     37.779 KB/s          1.889 MB/s            2.460 GB
                                   out    17.342 KB/s        127.957 KB/s          161.994 MB
                  
                              em0  in      4.159 KB/s        141.876 KB/s           34.396 MB
                                   out     6.426 KB/s          1.606 MB/s          267.269 MB
                  
                  

                  I have also modified (yesterday) my sysctl and my loader.conf.local, but the benefits I'm seeing are probably due to the very low traffic.

                  /etc/sysctl.conf

                  
                  kern.ipc.somaxconn=1024  # (default 128)
                  kern.ipc.maxsockbuf=16777216
                  net.inet.tcp.mssdflt=1460  # (default 536)
                  net.inet.tcp.sendbuf_max=16777216
                  net.inet.tcp.recvbuf_max=16777216
                  
                  net.inet.tcp.sendbuf_inc=262144  # (default 8192 )
                  net.inet.tcp.recvbuf_inc=262144  # (default 16384)
                  
                  net.inet.tcp.cc.algorithm=htcp
                  
                  # Reduce the amount of SYN/ACKs we will re-transmit to an unresponsive client.
                  net.inet.tcp.syncache.rexmtlimit=1  # (default 3)
                  
                  # Lessen max segment life to conserve resources
                  # ACK waiting time in milliseconds
                  # (default: 30000. RFC from 1979 recommends 120000)
                  net.inet.tcp.msl=5000
                  
                  # As of 15 Apr 2009, Igor Sysoev says that nolocaltimewait has a buggy implementation,
                  # so disable it for now until it gets fixed.
                  net.inet.tcp.nolocaltimewait=0
                  
                  # Protocol decoding in interrupt thread.
                  # If you have NIC that automatically sets flow_id then it's better to not
                  # use direct_force, and use advantages of multithreaded netisr(9)
                  # If you have Yandex drives you better off with `net.isr.direct_force=1` and
                  # `net.inet.tcp.read_locking=0` otherwise you may run into some TCP related
                  # problems.
                  # Note: If you have old NIC that don't set flow_ids you may need to
                  # patch `ip_input` to manually set FLOW_ID via `nh_m2flow`.
                  #
                  # FreeBSD 8+
                  net.isr.direct=1 
                  
                  net.isr.direct_force=1 
                  # Explicit Congestion Notification
                  # (See http://en.wikipedia.org/wiki/Explicit_Congestion_Notification)
                  #net.inet.tcp.ecn.enable=1 
                  
                  # Flowtable - flow caching mechanism
                  # Useful for routers
                  net.inet.flowtable.enable=1
                  net.inet.flowtable.nmbflows=65535
                  vm.pmap.shpgperproc=2048
                  
                  net.inet.tcp.recvspace=1024000
                  net.inet.tcp.sendspace=1024000
                  
                  net.inet.ip.forwarding=1      # (default 0)
                  net.inet.ip.fastforwarding=1  # (default 0)
                  
                  # General Security and DoS mitigation.
                  net.inet.ip.check_interface=1         # verify packet arrives on correct interface (default 0)
                  net.inet.ip.portrange.randomized=1    # randomize outgoing upper ports (default 1)
                  net.inet.ip.process_options=0         # IP options in the incoming packets will be ignored (default 1)
                  net.inet.ip.random_id=1               # assign a random IP_ID to each packet leaving the system (default 0)
                  net.inet.ip.redirect=0                # do not send IP redirects (default 1)
                  net.inet.ip.accept_sourceroute=0      # drop source routed packets since they can not be trusted (default 0)
                  net.inet.ip.sourceroute=0             # if source routed packets are accepted the route data is ignored (default 0)
                  net.inet.ip.stealth=1                 # do not reduce the TTL by one(1) when a packets goes through the firewall (default 0)
                  net.inet.icmp.bmcastecho=0            # do not respond to ICMP packets sent to IP broadcast addresses (default 0)
                  net.inet.icmp.maskfake=0              # do not fake reply to ICMP Address Mask Request packets (default 0)
                  net.inet.icmp.maskrepl=0              # replies are not sent for ICMP address mask requests (default 0)
                  net.inet.icmp.log_redirect=0          # do not log redirected ICMP packet attempts (default 0)
                  net.inet.icmp.drop_redirect=1         # no redirected ICMP packets (default 0)
                  net.inet.icmp.icmplim=10              # number of ICMP/RST packets/sec to limit returned packet bursts during a DoS. (default 200)
                  net.inet.icmp.icmplim_output=1        # show "Limiting open port RST response" messages (default 1)
                  net.inet.tcp.drop_synfin=1            # SYN/FIN packets get dropped on initial connection (default 0)
                  net.inet.tcp.ecn.enable=0            # explicit congestion notification (ecn) warning: some ISP routers abuse it (default 0)
                  net.inet.tcp.fast_finwait2_recycle=1  # recycle FIN/WAIT states quickly (helps against DoS, but may cause false RST) (default 0)
                  net.inet.tcp.icmp_may_rst=0           # icmp may not send RST to avoid spoofed icmp/udp floods (default 1)
                  #net.inet.tcp.maxtcptw=15000          # max number of tcp time_wait states for closing connections (default 5120)
                  net.inet.tcp.msl=3000                 # 3s maximum segment life waiting for an ACK in reply to a SYN-ACK or FIN-ACK (default 30000)
                  net.inet.tcp.path_mtu_discovery=0     # disable MTU discovery since most ICMP type 3 packets are dropped by others (default 1)
                  net.inet.tcp.rfc3042=0                # disable limited transmit mechanism which can slow burst transmissions (default 1)
                  net.inet.tcp.sack.enable=1            # TCP Selective Acknowledgments are needed for high throughput (default 1)
                  net.inet.udp.blackhole=1              # drop udp packets destined for closed sockets (default 0)
                  net.inet.tcp.blackhole=2              # drop tcp packets destined for closed ports (default 0)
                  #net.route.netisr_maxqlen=4096        # route queue length (rtsock using "netstat -Q") (default 256)
                  security.bsd.see_other_uids=0         # only allow users to see their own processes. root can see all (default 1)
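                  Most of the values above can be set live with sysctl(8) before committing them to a config file. A minimal sketch for spot-checking a few of them against the running kernel (the three OIDs are just a sample from the list above; the fallback echo covers kernels where an OID does not exist):

```shell
# Spot-check a few of the tunables above against the running kernel.
# OIDs that don't exist (or a missing sysctl binary) fall through to the echo.
for oid in net.inet.ip.fastforwarding net.inet.tcp.blackhole net.inet.icmp.icmplim; do
    val=$(sysctl -n "$oid" 2>/dev/null) && echo "$oid = $val" \
                                        || echo "$oid: not available on this kernel"
done
```

                  Note that values set this way do not survive a reboot; on pfSense, persistent sysctls belong under System > Advanced > System Tunables.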
                  

                  /boot/loader.conf.local

                  
                  legal.intel_wpi.license_ack=1 # accept the Intel firmware license
                  legal.intel_ipw.license_ack=1
                  
                  aio_load="YES"                     # Async IO system calls
                  autoboot_delay="3"                 # reduce boot menu delay from 10 to 3 seconds. 
                  
                  cc_htcp_load="YES" 
                  
                  kern.ipc.nmbclusters="262144"
                  kern.ipc.somaxconn="4096"
                  kern.ipc.maxsockets="204800"
                  
                  hw.em.rxd="4096"
                  hw.em.txd="4096"
                  hw.em.fc_setting="0"
                  hw.em.num_queues="4"
                  
                  kern.sched.slice="1"
                  
                  # new additions start here
                  
                  # Some useful netisr tunables. See sysctl net.isr
                  net.isr.maxthreads=4
                  net.isr.defaultqlimit=10240
                  net.isr.maxqlimit=10240
                  # Bind netisr threads to CPUs
                  net.isr.bindthreads=1
                  
                  # Thermal sensor support: amdtemp for AMD CPUs (e.g. Opteron),
                  # coretemp for Intel. Note that in loader.conf these are loaded
                  # as modules; bare "device" lines belong in a kernel config file.
                  #amdtemp_load="YES"
                  coretemp_load="YES"
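                  One sanity check on the kern.ipc.nmbclusters value above: each mbuf cluster is 2048 bytes, so 262144 clusters can pin up to 512 MB of kernel memory, which is a sizeable slice of kernel address space on an i386 box with 2 GB RAM. The arithmetic:

```shell
# Worst-case memory reserved by kern.ipc.nmbclusters=262144,
# at 2048 bytes per mbuf cluster.
awk 'BEGIN { printf "%.0f MB\n", 262144 * 2048 / (1024 * 1024) }'
# → 512 MB
```

                  With the dashboard showing MBUF usage at only 7%, a smaller value may be the safer choice on 32-bit hardware.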
                  

                  And this is the ping from pfSense:

                  PING 192.168.2.1 (192.168.2.1) from 192.168.2.5: 56 data bytes
                  64 bytes from 192.168.2.1: icmp_seq=0 ttl=64 time=0.203 ms
                  64 bytes from 192.168.2.1: icmp_seq=1 ttl=64 time=0.149 ms
                  64 bytes from 192.168.2.1: icmp_seq=2 ttl=64 time=0.142 ms
                  64 bytes from 192.168.2.1: icmp_seq=3 ttl=64 time=20.249 ms
                  64 bytes from 192.168.2.1: icmp_seq=4 ttl=64 time=1.627 ms
                  64 bytes from 192.168.2.1: icmp_seq=5 ttl=64 time=0.177 ms
                  64 bytes from 192.168.2.1: icmp_seq=6 ttl=64 time=0.158 ms
                  64 bytes from 192.168.2.1: icmp_seq=7 ttl=64 time=0.101 ms
                  64 bytes from 192.168.2.1: icmp_seq=8 ttl=64 time=0.219 ms
                  64 bytes from 192.168.2.1: icmp_seq=9 ttl=64 time=0.149 ms
                  
                  --- 192.168.2.1 ping statistics ---
                  10 packets transmitted, 10 packets received, 0.0% packet loss
                  round-trip min/avg/max/stddev = 0.101/2.317/20.249/5.993 ms
                  
                  PING 192.168.1.254 (192.168.1.254) from 192.168.1.130: 56 data bytes
                  64 bytes from 192.168.1.254: icmp_seq=0 ttl=64 time=0.858 ms
                  64 bytes from 192.168.1.254: icmp_seq=1 ttl=64 time=0.821 ms
                  64 bytes from 192.168.1.254: icmp_seq=2 ttl=64 time=0.686 ms
                  64 bytes from 192.168.1.254: icmp_seq=3 ttl=64 time=0.805 ms
                  64 bytes from 192.168.1.254: icmp_seq=4 ttl=64 time=0.672 ms
                  64 bytes from 192.168.1.254: icmp_seq=5 ttl=64 time=0.667 ms
                  64 bytes from 192.168.1.254: icmp_seq=6 ttl=64 time=1.909 ms
                  64 bytes from 192.168.1.254: icmp_seq=7 ttl=64 time=6.646 ms
                  64 bytes from 192.168.1.254: icmp_seq=8 ttl=64 time=0.638 ms
                  64 bytes from 192.168.1.254: icmp_seq=9 ttl=64 time=0.767 ms
                  
                  --- 192.168.1.254 ping statistics ---
                  10 packets transmitted, 10 packets received, 0.0% packet loss
                  round-trip min/avg/max/stddev = 0.638/1.447/6.646/1.769 ms
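                  As a side note, the min/avg/max/stddev line in these summaries is just population statistics over the raw round-trip times, which is why the single 20ms outlier dominates the stddev. The first run's summary can be reproduced from its ten samples:

```shell
# Recompute ping's summary line from the ten RTTs of the first run above.
printf '%s\n' 0.203 0.149 0.142 20.249 1.627 0.177 0.158 0.101 0.219 0.149 |
awk '{ s += $1; ss += $1 * $1
       if (NR == 1 || $1 < min) min = $1
       if ($1 > max) max = $1 }
     END { avg = s / NR
           printf "round-trip min/avg/max/stddev = %.3f/%.3f/%.3f/%.3f ms\n",
                  min, avg, max, sqrt(ss / NR - avg * avg) }'
# → round-trip min/avg/max/stddev = 0.101/2.317/20.249/5.993 ms
```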
                  

                  I'll report back on Monday with what happens under more load.

                  1 Reply Last reply Reply Quote 0
                  • johnpozJ
                    johnpoz LAYER 8 Global Moderator
                    last edited by

                    So are there physical wires and devices involved anywhere in this setup - or is this all virtual networks and VMs?

                    Sorry, but even with 2 boxes connected together by a wire..  these just seem too low.

                    64 bytes from 192.168.2.1: icmp_seq=0 ttl=64 time=0.203 ms
                    64 bytes from 192.168.2.1: icmp_seq=1 ttl=64 time=0.149 ms
                    64 bytes from 192.168.2.1: icmp_seq=2 ttl=64 time=0.142 ms

                    .15 to .2 ms is freakishly FAST..  And then bounces to 20ms ??
                    64 bytes from 192.168.2.1: icmp_seq=3 ttl=64 time=20.249 ms

                    This seems more realistic for normal lan pings - me pinging box on my network.
                    From 192.168.1.99: bytes=60 seq=0001 TTL=64 ID=54ef time=0.494ms
                    From 192.168.1.99: bytes=60 seq=0002 TTL=64 ID=54f0 time=0.415ms
                    From 192.168.1.99: bytes=60 seq=0003 TTL=64 ID=54f1 time=0.407ms
                    From 192.168.1.99: bytes=60 seq=0004 TTL=64 ID=54f2 time=0.404ms

                    Ok low 3's – but sub .2  -- I don't think I have ever seen such speeds.

                    64 bytes from 192.168.2.1: icmp_seq=7 ttl=64 time=0.101 ms

                    You sure you're not pinging your own IP address again? ;)

                    Here is my box pinging itself
                    From 192.168.1.100: bytes=60 seq=0001 TTL=128 ID=29d0 time=0.122ms
                    From 192.168.1.100: bytes=60 seq=0002 TTL=128 ID=29d2 time=0.159ms
                    From 192.168.1.100: bytes=60 seq=0003 TTL=128 ID=29d4 time=0.144ms
                    From 192.168.1.100: bytes=60 seq=0004 TTL=128 ID=29d6 time=0.142ms

                    Now sure I can understand those speeds pinging your own IP.

                    So here is a question for you - are you having actual operational issues, with real applications affected by packet loss.. or are you just seeing weird stuff when you're pinging?

                    An intelligent man is sometimes forced to be drunk to spend time with his fools
                    If you get confused: Listen to the Music Play
                    Please don't Chat/PM me for help, unless mod related
                    SG-4860 24.11 | Lab VMs 2.8, 24.11

                    1 Reply Last reply Reply Quote 0
                    • B
                      bullet92
                      last edited by

                      So are there physical wires and devices involved anywhere in this setup - or is this all virtual networks and VMs?

                      I've used a VM (VirtualBox on Windows) only for testing ping from another Linux box (I hate the Windows ping); my pfSense setup is on real hardware!

                      64 bytes from 192.168.2.1: icmp_seq=0 ttl=64 time=0.203 ms
                      64 bytes from 192.168.2.1: icmp_seq=1 ttl=64 time=0.149 ms
                      64 bytes from 192.168.2.1: icmp_seq=2 ttl=64 time=0.142 ms

                      .15 to .2 ms is freakishly FAST..  And then bounces to 20ms ??
                      64 bytes from 192.168.2.1: icmp_seq=3 ttl=64 time=20.249 ms

                      You finally hit the problem!! ;D How is it possible?!  :o
                      I don't know why it's so fast, but this is a production server, with good cabling (Cat 5e/6) and a decent switch (though not an excellent one).

                      Ok low 3's – but sub .2  -- I don't think I have ever seen such speeds.

                      The 192.168.2.1 box is a Zeroshell router; before putting it in I didn't have pings this low.

                      You sure you're not pinging your own IP address again? ;)

                      I'm sure because of this:

                      PING 192.168.2.1 (192.168.2.1) from 192.168.2.5: 56 data bytes
                      64 bytes from 192.168.2.1: icmp_seq=0 ttl=64 time=0.203 ms
                      64 bytes from 192.168.2.1: icmp_seq=1 ttl=64 time=0.149 ms

                      and:

                      Interface configuration:

                       WAN (wan)       -> em1        -> v4: 192.168.1.130/24
                       DMZ (lan)       -> em0        -> v4: 172.16.30.5/24
                       WAN2 (opt1)     -> em1_vlan13 -> v4: 192.168.2.5/24
                       WIBRI (opt2)    -> em3        -> v4: 192.168.168.5/24
                       SEDE (opt3)     -> em0_vlan10 -> v4: 192.168.132.5/24
                       WAN3 (opt4)     -> em2        -> v4: 217.xx.xx.30/27
                      

                      So here is a question for you - are you having actual operational issues, with real applications affected by packet loss.. or are you just seeing weird stuff when you're pinging?

                      I started monitoring my pings because of issues with VoIP, which is more susceptible to packet loss and high latency: I was getting a lot of dropped calls and very poor call quality.
                      Then I PHYSICALLY switched WAN3 (where the VoIP traffic goes out) from em0 (em0 and em1 are ports of the same dual-port card) to em2 (a motherboard NIC), AND moved the other traffic that used to exit via WAN3 over to WAN1, SO that network now works well. At that point I thought I had a broken or misbehaving NIC, so I tried another hardware configuration, but the issue persisted. Then, on my production server, I tried putting all the WANs on the one NIC I was sure worked well: em3. I created and configured 2 WANs on 2 VLANs + 1 WAN untagged: DISASTER  :o ALL the WANs had the same problem, even worse. I also tried changing the switch!

                      With my current configuration I don't have problems with VoIP, but web browsing is worse and slower because of this lag/packet loss.

                      PS. thanks for your interest  :D

                      1 Reply Last reply Reply Quote 0
                      • johnpozJ
                        johnpoz LAYER 8 Global Moderator
                        last edited by

                        And what MACs are you seeing for those IPs, from arp -a on 2.5 while pinging 2.1?

                        I will do some testing at work from a high-end Cisco switch connected to another high-end Cisco switch..  It's just that sub-.2ms seems like one screaming LAN, or you're just pinging yourself..

                        And your sub .2 and then out of the blue 20ms – then the next ping back to sub .2, that just seems not right.
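                        A concrete way to run that check (everything below is hypothetical sample data with made-up MACs; on the real box you would take peer_mac from `arp -an` and own_mac from `ifconfig em1_vlan13`):

```shell
# Hypothetical arp output and interface MAC, for illustration only.
# Real box: peer_mac=$(arp -an | awk '/192\.168\.2\.1/ { print $4 }')
#           own_mac=$(ifconfig em1_vlan13 | awk '/ether/ { print $2 }')
arp_out='? (192.168.2.1) at 00:1b:2f:aa:bb:cc on em1_vlan13 expires in 1173 seconds'
peer_mac=$(printf '%s\n' "$arp_out" | awk '/192\.168\.2\.1/ { print $4 }')
own_mac='00:1e:0b:11:22:33'
if [ "$peer_mac" = "$own_mac" ]; then
    echo "same MAC: that ping never left the box"
else
    echo "different MACs: the ping really crossed the wire ($peer_mac)"
fi
# → different MACs: the ping really crossed the wire (00:1b:2f:aa:bb:cc)
```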

                        An intelligent man is sometimes forced to be drunk to spend time with his fools
                        If you get confused: Listen to the Music Play
                        Please don't Chat/PM me for help, unless mod related
                        SG-4860 24.11 | Lab VMs 2.8, 24.11

                        1 Reply Last reply Reply Quote 0
                        • stephenw10S
                          stephenw10 Netgate Administrator
                          last edited by

                          Those ping times look OK to me, aside from the sudden jump to 20ms.
                          My test pfSense box is set up behind my home pfSense box, connected directly by a 0.5m Cat5e cable. The 'normal' ping response is <0.2ms. See the attached RRD graph.
                          It is attached via some bridged ports on the pfSense box as well, so I would expect that to add some time.

                          Though your ping times are still lower than mine, and you are using a switch.

                          In your diagram above you seem to have two boxes labelled 192.168.1.254. A typo? Just indicating the subnet? Am I misunderstanding your diagram?

                          Steve

                          status_rrd_graph_img.png
                          status_rrd_graph_img.png_thumb

                          1 Reply Last reply Reply Quote 0
                          • D
                            dreamslacker
                            last edited by

                            @johnpoz:

                            And what MACs are you seeing for those IPs, from arp -a on 2.5 while pinging 2.1?

                            I will do some testing at work from a high-end Cisco switch connected to another high-end Cisco switch..  It's just that sub-.2ms seems like one screaming LAN, or you're just pinging yourself..

                            And your sub .2 and then out of the blue 20ms – then the next ping back to sub .2, that just seems not right.

                            For short cable runs on gigabit, that is rather normal.  I usually run ping tests after setting up structured cabling (lost packets are more indicative of issues than throughput tests, which vary widely).  Sub-.3ms is very normal even for longer runs (>50m).  The varying factor is usually the load on the end-point systems.
                            e.g.
                            This is a ping test for a Gigabit connected access point that is currently actively streaming HD videos over WLAN:

                            --- 192.168.0.11 ping statistics ---
                            5 packets transmitted, 5 packets received, 0.0% packet loss
                            round-trip min/avg/max/stddev = 0.463/4.956/9.644/3.571 ms
                            

                            This for a 10/100 connected access point that is currently inactive (no clients connected):

                            --- 192.168.0.10 ping statistics ---
                            5 packets transmitted, 5 packets received, 0.0% packet loss
                            round-trip min/avg/max/stddev = 0.257/0.275/0.327/0.026 ms
                            

                            And this is a ping test for my Gigabit connected PC that's hardly doing much other than streaming a youtube video or two:

                            --- 192.168.0.2 ping statistics ---
                            5 packets transmitted, 5 packets received, 0.0% packet loss
                            round-trip min/avg/max/stddev = 0.247/0.262/0.312/0.025 ms
                            
                            1 Reply Last reply Reply Quote 0
                            • stephenw10S
                              stephenw10 Netgate Administrator
                              last edited by

                              Forgot to say that my boxes above are both using fxp 10/100 NICs.  ;)

                              Steve

                              1 Reply Last reply Reply Quote 0
                              • D
                                dreamslacker
                                last edited by

                                It's really normal to see <0.5ms pings on a local network run (even through dumb switches).  My point being that the latencies will vary greatly based on the end-point devices and their load.  A short spike may just indicate that the end device is under load at the time.

                                In this case, the devices the OP is pinging may simply be under load at the time (since they are, after all, routers doing their job).

                                1 Reply Last reply Reply Quote 0
                                • B
                                  bullet92
                                  last edited by

                                  @dreamslacker:

                                  My point being that the latencies will vary greatly based on the end-point devices and their load.  A short spike may just indicate that the end device is under load at the time.
                                  In this case, the devices the OP is pinging may simply be under load at the time (since they are, after all, routers doing their job).

                                  That would make sense, but if it were the case I should be saturating 100Mbit, or the end-point devices should be at 100% CPU or something like that. It also can't be the explanation, because when I see this latency from pfSense, if I ping the same router from another box at the SAME TIME, I get a fast and stable ping.

                                  @stephenw10:

                                  In your diagram above you seem to have two boxes labelled 192.168.1.254. Typo? Just indicating the subnet? My not understanding your diagram?

                                  Sorry, my error: em1 is 192.168.1.130/24, not 192.168.1.254.

                                  1 Reply Last reply Reply Quote 0
                                  • D
                                    dreamslacker
                                    last edited by

                                    @bullet92:

                                    That would make sense, but if it were the case I should be saturating 100Mbit, or the end-point devices should be at 100% CPU or something like that. It also can't be the explanation, because when I see this latency from pfSense, if I ping the same router from another box at the SAME TIME, I get a fast and stable ping.

                                    Is this machine on the same network as pfSense?

                                    Presumably, pfSense is pinging the said router on its 'LAN' or 'DMZ' interface.  Is the machine you're using also attached to the same interface or a different interface?

                                    BTW, do you have traffic shaping or QOS enabled on pfSense or the target router(s)?

                                    1 Reply Last reply Reply Quote 0
                                    • johnpozJ
                                      johnpoz LAYER 8 Global Moderator
                                      last edited by

                                      "It's really normal to see <0.5ms pings on a local network run"

                                      Agreed..  .4 to .5 is very common; I see it all the time on the LAN and expect it..  It's just .1 to .2 I don't see – I wouldn't call our switches overworked or anything, but the only time I recall seeing such low numbers is pinging locally..

                                      When I'm at work tomorrow I'm going to ping around the datacenter and see what kind of low times I can find ;)

                                      An intelligent man is sometimes forced to be drunk to spend time with his fools
                                      If you get confused: Listen to the Music Play
                                      Please don't Chat/PM me for help, unless mod related
                                      SG-4860 24.11 | Lab VMs 2.8, 24.11

                                      1 Reply Last reply Reply Quote 0
                                      • B
                                        bullet92
                                        last edited by

                                        @dreamslacker:

                                        Is this machine on the same network as pfSense?
                                        Presumably, pfSense is pinging the said router on its 'LAN' or 'DMZ' interface.  Is the machine you're using also attached to the same interface or a different interface?
                                        BTW, do you have traffic shaping or QOS enabled on pfSense or the target router(s)?

                                        Yes, I've put this machine on the same switch (so the same router interface) as pfSense. Yes, traffic shaping is currently enabled on pfSense, and disabled on the target routers.

                                        1 Reply Last reply Reply Quote 0
                                        • D
                                          dreamslacker
                                          last edited by

                                          @bullet92:

                                          @dreamslacker:

                                          Is this machine on the same network as pfSense?
                                          Presumably, pfSense is pinging the said router on its 'LAN' or 'DMZ' interface.  Is the machine you're using also attached to the same interface or a different interface?
                                          BTW, do you have traffic shaping or QOS enabled on pfSense or the target router(s)?

                                          Yes, I've put this machine on the same switch (so the same router interface) as pfSense. Yes, traffic shaping is currently enabled on pfSense, and disabled on the target routers.

                                          Try prioritizing ICMP using the floating rules and see what you get. In practical terms it does little, but if it works then you know what to do to get the best effect on your setup.
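                                          If you go that route, it is worth confirming from the shell that ICMP actually lands in the queue you created. The sketch below parses a made-up excerpt shaped like `pfctl -vsq` output (the queue names and counters are invented; on a live box you would pipe the real command output in):

```shell
# Made-up queue stats shaped like ALTQ `pfctl -vsq` output.
pf_out='queue qVoIP priority 7
  [ pkts: 1208  bytes: 98760  dropped pkts: 0 bytes: 0 ]
queue qDefault priority 1
  [ pkts: 55120  bytes: 61M  dropped pkts: 342 bytes: 51K ]'
# Print "queue-name dropped-packets" for each queue; drops piling up on
# the default queue while the priority queue stays clean suggests the
# floating rule is matching.
printf '%s\n' "$pf_out" | awk '/^queue/ { q = $2 } /dropped/ { print q, $8 }'
# → qVoIP 0
# → qDefault 342
```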

                                          1 Reply Last reply Reply Quote 0
                                          • D
                                            dreamslacker
                                            last edited by

                                            @johnpoz: Don't bother to do so on my account. Lol. I do see 0.2 ms round trips on wired connections now and then during line testing, but that is with fully idle systems - i.e. an idling system pinging a smart switch on the other end, with no other devices connected.
                                            Nevertheless, a lightly loaded system should still give 0.4-0.5 ms pings.

                                            1 Reply Last reply Reply Quote 0
                                            • First post
                                              Last post
                                            Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.