Netgate Discussion Forum
    High (game) latency when AltQ enabled and traffic >40Mbit

    Traffic Shaping
    7 Posts 2 Posters 6.6k Views
hexa:

Hi,
We have a 200 Mbit/s line and two pfSense nodes in active/passive with CARP. With AltQ enabled and the default game ruleset, latency is fine while usage stays below 40 Mbit/s, and the queues fill as expected. Once usage goes above 60 Mbit/s, pings start to fluctuate and games soon become unplayable (>150 ms lag). If I artificially load the link further, to around 100 Mbit/s, latency goes through the roof. If I disable AltQ at that moment, my ping drops right back to 7 ms! With AltQ disabled there is no latency problem as long as we stay well below 200 Mbit/s. Throughput of the whole line also drops substantially when AltQ is enabled. Can anyone help me get rid of this lag when using the traffic shaper? I wouldn't mind sacrificing some bandwidth for better game latency.

eri--:

If this is 1.2.x, you cannot do a thing from the GUI, sorry.

hexa:

Is this the case for both of my problems? I'm aware I cannot fix the "bandwidth loss" when using AltQ in 1.2, but what about the latency?

What version would satisfy my needs? Maybe I can test it.

eri--:

2.0, and you have to tweak your TBR (token bucket regulator) size and queue length to get better response.
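
On 2.0 these map to the tbrsize and qlimit values in the generated ALTQ rules. As a rough illustration only, a minimal pf.conf sketch; the interface, bandwidth, queue names, and numbers below are placeholder assumptions, not what pfSense actually generates:

    # Hypothetical rules; em0, 200Mb, and all values are assumptions.
    altq on em0 priq bandwidth 200Mb tbrsize 25000 queue { qGames, qDefault }

    # Shallow game queue: fewer buffered packets, so less queueing delay.
    queue qGames priority 7 qlimit 50 priq(red)

    # Bulk traffic tolerates buffering and gets a deeper queue.
    queue qDefault priority 1 qlimit 500 priq(default)

A shallow qlimit trades drops for delay (packets are discarded rather than sitting in the queue), while tbrsize bounds how big a burst the driver can pull from the shaper at once.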

hexa:

OK, so is the bad latency only due to queue length? What if I manually increase the queue length on 1.2?

I'll start testing 2.0 soon.

eri--:

It's not only about queue length.
Read this excerpt from http://www.sonycsl.co.jp/~kjc/software/altq-new-design.txt
It is very low-level detail, but it still applies to the tbr option you can configure on 2.0:

                
                2.5 tokenbucket regulator
                
                The purpose of the token bucket regulator is to limit the amount of
                packets that a driver can dequeue.
                A token bucket has "rate" and "size".  Tokens accumulate in a bucket
                at the average "rate", up to the bucket "size".
                A driver can dequeue a packet as long as there are positive tokens,
                and after a packet is dequeued, the size of the packet is subtracted
                from the tokens.
                (note that this implementation allows the token to be negative as a
                deficit, and differs from a typical token bucket that compares the
                packet size with the remaining tokens beforehand.)
                
                It is important to understand the roles of "rate" and "size".
                The bucket size controls the amount of burst that can be dequeued at a
                time, and controls a greedy device trying to dequeue packets as much as
                possible.  This is the primary purpose of the token bucket regulator
                in ALTQ.  Thus, the rate should be set to the wire speed.  (even if
                the rate is set to a larger value, it does not matter much since our
                focus is excessive bursts.)
                
                On the other hand, if the rate is set to a smaller value than the wire
                speed, the token bucket regulator becomes a shaper that limits the
                long-term output rate.
                Another important point is that, when the rate is set to more than the
                actual transfer speed, tx complete interrupts can trigger the next
                dequeue.  However, if the rate is smaller, the rate limit would be
                still in effect at the tx complete interrupt, and the rate limiting
                falls back to the kernel timer to trigger the next dequeue.  In order
                to achieve the target rate under timer-driven rate limiting, the
                bucket size should be increased to fill the timer interval.
                
                

                From the pf manual page:

                
                tbrsize <size>
                        Adjusts the size, in bytes, of the token bucket regulator.  If
                        not specified, heuristics based on the interface bandwidth are
                        used to determine the size.
                
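To put numbers on the excerpt: under timer-driven rate limiting the bucket must hold at least one kernel tick's worth of tokens, or the link starves between ticks. Worked for this thread's link, assuming a 1 ms timer (HZ=1000); the figures are illustrative, not measured:

    wire rate      = 200 Mbit/s = 25,000,000 bytes/s
    kernel tick    = 1 ms (assuming HZ=1000)
    bytes per tick = 25,000,000 B/s x 0.001 s = 25,000 bytes
    => tbrsize of at least ~25,000 bytes at 200 Mbit/s

An undersized bucket caps throughput (possibly the throughput drop reported above), while an oversized one allows bursts that show up as queueing delay in front of the game traffic.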
hexa:

Thank you for the info.
