Netgate Discussion Forum

Latency "counts down" and then spikes when I create rules with limiters

Traffic Shaping · 13 Posts · 3 Posters · 1.8k Views
  • spcolyvas (Nov 1, 2016, 6:53 PM)

    I'm getting "cycling" in the latency metrics of my pfSense - any help would be much appreciated…

    I've created some limiters that limit bandwidth and specify a queue size, then created a rule that uses them; I have a limiter for both incoming and outgoing traffic.  I find that ping latency starts high and then slowly comes down, and once it finishes "counting down" it spikes back up to the original latency - like this: 90ms, 87ms, 86ms, 82ms, ..., 20ms, 90ms, 85ms, 84ms, ... - and it just keeps cycling.
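    (For context on what's under the hood: pfSense limiters are FreeBSD dummynet pipes, so the setup above corresponds roughly to raw ipfw configuration along the lines below. The pipe numbers, interface name, and values are only illustrative, not the rules pfSense actually generates.)

    # Illustrative only: a pair of dummynet pipes with a bandwidth cap and a
    # fixed 40-slot queue, one for each traffic direction.
    ipfw pipe 1 config bw 500Kbit/s queue 40
    ipfw pipe 2 config bw 500Kbit/s queue 40

    # Rules sending traffic on a (hypothetical) interface em1 through the pipes.
    ipfw add pipe 1 ip from any to any in  via em1
    ipfw add pipe 2 ip from any to any out via em1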

    I have a test bed set up to evaluate pfSense: an AWS server (the client) pointing only to an AWS instance of pfSense, which points to another AWS instance (the server).  I use iperf3 to create a load from the client through pfSense to the server, and I open a second console on the client to ping the server (to watch the latency).  Here are the commands:

    Client:
    iperf3 -c 10.30.36.83 -i 1 -b 500k -p 8000 -u -l 220 -t 600
    (the -b bandwidth value changes depending on the test, and the -l read/write buffer length changes too)

    Server:
    iperf3 -s -i 1 -p 8000
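    (To put a rough number on the offered load: with -b 500k and -l 220 the client is sending on the order of a few hundred small datagrams per second, e.g.:)

    # approximate UDP datagram rate at -b 500k with 220-byte payloads
    echo $(( 500000 / 8 / 220 ))    # ≈ 284 datagrams per second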

    Then I just ping the server and I get this behavior - notice how the latency cycles:

    64 bytes from 10.30.36.83: icmp_seq=1 ttl=254 time=44.0 ms
    64 bytes from 10.30.36.83: icmp_seq=2 ttl=254 time=42.9 ms
    64 bytes from 10.30.36.83: icmp_seq=3 ttl=254 time=40.8 ms
    64 bytes from 10.30.36.83: icmp_seq=4 ttl=254 time=38.9 ms
    64 bytes from 10.30.36.83: icmp_seq=5 ttl=254 time=37.9 ms
    64 bytes from 10.30.36.83: icmp_seq=6 ttl=254 time=35.9 ms
    64 bytes from 10.30.36.83: icmp_seq=7 ttl=254 time=33.8 ms
    64 bytes from 10.30.36.83: icmp_seq=8 ttl=254 time=31.9 ms
    64 bytes from 10.30.36.83: icmp_seq=9 ttl=254 time=29.9 ms
    64 bytes from 10.30.36.83: icmp_seq=10 ttl=254 time=28.8 ms
    64 bytes from 10.30.36.83: icmp_seq=11 ttl=254 time=26.9 ms
    64 bytes from 10.30.36.83: icmp_seq=12 ttl=254 time=24.8 ms
    64 bytes from 10.30.36.83: icmp_seq=13 ttl=254 time=22.9 ms
    64 bytes from 10.30.36.83: icmp_seq=14 ttl=254 time=20.8 ms
    64 bytes from 10.30.36.83: icmp_seq=15 ttl=254 time=18.8 ms
    64 bytes from 10.30.36.83: icmp_seq=16 ttl=254 time=16.8 ms
    64 bytes from 10.30.36.83: icmp_seq=17 ttl=254 time=14.9 ms
    64 bytes from 10.30.36.83: icmp_seq=18 ttl=254 time=12.8 ms
    64 bytes from 10.30.36.83: icmp_seq=19 ttl=254 time=10.8 ms
    64 bytes from 10.30.36.83: icmp_seq=20 ttl=254 time=8.91 ms
    64 bytes from 10.30.36.83: icmp_seq=21 ttl=254 time=6.95 ms
    64 bytes from 10.30.36.83: icmp_seq=22 ttl=254 time=5.90 ms
    64 bytes from 10.30.36.83: icmp_seq=23 ttl=254 time=5.87 ms
    64 bytes from 10.30.36.83: icmp_seq=24 ttl=254 time=5.83 ms
    64 bytes from 10.30.36.83: icmp_seq=25 ttl=254 time=5.88 ms
    64 bytes from 10.30.36.83: icmp_seq=26 ttl=254 time=5.88 ms
    64 bytes from 10.30.36.83: icmp_seq=27 ttl=254 time=5.75 ms
    64 bytes from 10.30.36.83: icmp_seq=28 ttl=254 time=5.98 ms
    64 bytes from 10.30.36.83: icmp_seq=29 ttl=254 time=5.83 ms
    64 bytes from 10.30.36.83: icmp_seq=30 ttl=254 time=5.88 ms
    64 bytes from 10.30.36.83: icmp_seq=31 ttl=254 time=22.7 ms
    64 bytes from 10.30.36.83: icmp_seq=34 ttl=254 time=70.8 ms
    64 bytes from 10.30.36.83: icmp_seq=35 ttl=254 time=68.8 ms
    64 bytes from 10.30.36.83: icmp_seq=36 ttl=254 time=66.9 ms
    64 bytes from 10.30.36.83: icmp_seq=37 ttl=254 time=65.9 ms
    64 bytes from 10.30.36.83: icmp_seq=38 ttl=254 time=63.8 ms
    64 bytes from 10.30.36.83: icmp_seq=39 ttl=254 time=62.0 ms
    64 bytes from 10.30.36.83: icmp_seq=40 ttl=254 time=60.9 ms
    64 bytes from 10.30.36.83: icmp_seq=41 ttl=254 time=58.9 ms
    64 bytes from 10.30.36.83: icmp_seq=42 ttl=254 time=57.9 ms
    64 bytes from 10.30.36.83: icmp_seq=43 ttl=254 time=55.9 ms
    64 bytes from 10.30.36.83: icmp_seq=44 ttl=254 time=53.8 ms
    64 bytes from 10.30.36.83: icmp_seq=45 ttl=254 time=51.9 ms
    64 bytes from 10.30.36.83: icmp_seq=46 ttl=254 time=49.9 ms
    64 bytes from 10.30.36.83: icmp_seq=47 ttl=254 time=48.8 ms
    64 bytes from 10.30.36.83: icmp_seq=48 ttl=254 time=46.8 ms
    64 bytes from 10.30.36.83: icmp_seq=49 ttl=254 time=44.9 ms
    64 bytes from 10.30.36.83: icmp_seq=50 ttl=254 time=42.9 ms
    64 bytes from 10.30.36.83: icmp_seq=51 ttl=254 time=40.8 ms
    64 bytes from 10.30.36.83: icmp_seq=52 ttl=254 time=38.8 ms
    64 bytes from 10.30.36.83: icmp_seq=53 ttl=254 time=37.0 ms
    64 bytes from 10.30.36.83: icmp_seq=54 ttl=254 time=35.8 ms
    64 bytes from 10.30.36.83: icmp_seq=55 ttl=254 time=33.8 ms
    64 bytes from 10.30.36.83: icmp_seq=56 ttl=254 time=31.9 ms
    64 bytes from 10.30.36.83: icmp_seq=57 ttl=254 time=29.8 ms
    64 bytes from 10.30.36.83: icmp_seq=58 ttl=254 time=27.9 ms
    64 bytes from 10.30.36.83: icmp_seq=59 ttl=254 time=25.9 ms
    64 bytes from 10.30.36.83: icmp_seq=60 ttl=254 time=24.8 ms
    64 bytes from 10.30.36.83: icmp_seq=61 ttl=254 time=22.8 ms
    64 bytes from 10.30.36.83: icmp_seq=62 ttl=254 time=20.8 ms
    64 bytes from 10.30.36.83: icmp_seq=63 ttl=254 time=18.8 ms
    64 bytes from 10.30.36.83: icmp_seq=64 ttl=254 time=16.8 ms
    64 bytes from 10.30.36.83: icmp_seq=65 ttl=254 time=14.9 ms
    64 bytes from 10.30.36.83: icmp_seq=66 ttl=254 time=13.8 ms
    64 bytes from 10.30.36.83: icmp_seq=67 ttl=254 time=11.8 ms
    64 bytes from 10.30.36.83: icmp_seq=68 ttl=254 time=9.98 ms
    64 bytes from 10.30.36.83: icmp_seq=69 ttl=254 time=8.86 ms
    64 bytes from 10.30.36.83: icmp_seq=70 ttl=254 time=6.90 ms

    • Nullity (Nov 1, 2016, 7:06 PM)

      Have you tried tuning the limiter's queue length?
      What bitrate is the limiter set to? On low-bandwidth links a single MTU-sized packet can cause a large latency fluctuation.

      If latency is a primary concern, you likely want to use CoDel, which is part of the traffic-shaping queue config. Eventually CoDel will be part of limiters, but not yet.


      • spcolyvas (Nov 1, 2016, 8:49 PM)

        Yes, at lower bitrates there is higher latency, and adjusting the queue size helps, but in any case I get this "count down" behavior.  Has anyone else seen this?  In the example I was limiting the bitrate to 500 kbps and setting the bandwidth to 500 kbps on iperf.  The queue size was the default (40 slots).

        • KOM (Nov 1, 2016, 8:51 PM)

          It may just be an artifact of the limiting algorithm.

          • spcolyvas (Nov 1, 2016, 8:53 PM)

            Can I use rules and limiters when CoDel is enabled?

            • Nullity (Nov 1, 2016, 9:23 PM)

              @spcolyvas:

              Yes, at lower bitrates there is higher latency, and adjusting the queue size helps, but in any case I get this "count down" behavior.  Has anyone else seen this?  In the example I was limiting the bitrate to 500 kbps and setting the bandwidth to 500 kbps on iperf.  The queue size was the default (40 slots).

              FYI, a 500Kbit link will take 24 milliseconds to send a 1500 byte (12000 bit), MTU-sized packet.
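              (A quick check of that arithmetic:)

              # serialization delay of one 1500-byte packet on a 500 Kbit/s link
              echo "scale=3; (1500 * 8) / 500000" | bc    # 0.024 s = 24 ms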


              • Nullity (Nov 1, 2016, 9:26 PM)

                @spcolyvas:

                Can I use rules and limiters when CoDel is enabled?

                No. Well, maybe, but it's not advised.

                Why do you want to use limiters instead of traffic-shaping queues anyway?


                • spcolyvas (Nov 1, 2016, 10:55 PM, last edited Nov 1, 2016, 11:24 PM)

                  I'm new to this.  I need to create combinations of the following - can I do this with traffic-shaping queues?
                  1.  configurable max bandwidth
                  2.  configurable latency
                  3.  configurable packet loss

                  BTW, this will mostly be UDP traffic.

                  • Nullity (Nov 2, 2016, 12:02 AM)

                    @spcolyvas:

                    I'm new to this.  I need to create combinations of the following - can I do this with traffic-shaping queues?
                    1.  configurable max bandwidth
                    2.  configurable latency
                    3.  configurable packet loss

                    BTW, this will mostly be UDP traffic.

                    1 & 2 can be accomplished with either queues (HFSC, I know, can do both) or limiters, but 3 can only be accomplished with limiters. Limiters are actually FreeBSD's "dummynet", which was originally created for network testing, which is why it is capable of things like forcing packet loss.
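                    (To illustrate, a dummynet pipe configured directly through ipfw can combine all three knobs in one line - the numbers here are only placeholders:)

                    # hypothetical example: 500 Kbit/s cap, 50 ms of added delay (value is in
                    # milliseconds), 1% packet loss, 40-slot queue
                    ipfw pipe 1 config bw 500Kbit/s delay 50 plr 0.01 queue 40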

                    Can you give more details about exactly what you are trying to do?


                      • Nullity (Nov 2, 2016, 12:09 AM)

                      Also, were the ICMP ping packets going through the same limiter?


                        • Nullity (Nov 2, 2016, 9:24 AM)

                        Hmm, you also lost a few packets right when the ping spiked (32 & 33). That's a bit strange.


                          • spcolyvas (Nov 3, 2016, 12:13 AM)

                          Thanks Nullity,

                          ICMP packets do go through the same limiter.  Eventually I'll be pumping video through these limiters to see how the video client and server adapt to the bandwidth constraints, latency, packet loss, etc.  The client/server should do things like adjust the framerate, resolution, and so on.  The problem is that the behavior pfSense is showing - starting at 40ms latency and counting down to 6ms - will really mix things up.  It may be a good test, but I'd like to run other tests as well.

                            • Nullity (Nov 3, 2016, 1:21 AM)

                            @spcolyvas:

                            Thanks Nullity,

                             ICMP packets do go through the same limiter.  Eventually I'll be pumping video through these limiters to see how the video client and server adapt to the bandwidth constraints, latency, packet loss, etc.  The client/server should do things like adjust the framerate, resolution, and so on.  The problem is that the behavior pfSense is showing - starting at 40ms latency and counting down to 6ms - will really mix things up.  It may be a good test, but I'd like to run other tests as well.

                             For testing, use limiters, sure. Limiters, AFAIK, make no worst-case latency guarantees.

                             But for actual deployment of video/audio services, use HFSC, optionally with "CoDel Active Queue" enabled. I'd at least test your scenario with HFSC to see whether your latency fluctuation is being caused by the limiters or by something else.

                             I dunno. Without more details it's hard to even know where to begin. Maybe iperf is queueing packets in bursts… maybe... ? More tests are in order. :)
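                             (For what it's worth, here is a bare-bones sketch of what an HFSC real-time queue looks like in raw pf.conf ALTQ syntax. The interface name and rates are made up, and on pfSense you would normally build this from the Traffic Shaper wizard rather than by hand.)

                             altq on em0 hfsc bandwidth 500Kb queue { q_default, q_video }
                             queue q_default bandwidth 20% hfsc(default)
                             queue q_video bandwidth 80% hfsc(realtime 400Kb)
                             # send the UDP test traffic into the real-time queue
                             pass out on em0 proto udp from any to any queue q_video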

