Netgate Discussion Forum

    NeXusLAN Party Day 1 RRD Graphs

    Traffic Shaping
    20 Posts 5 Posters 3.8k Views

    • S
      sideout
      last edited by

      Queue page for you as well.

      NexuLANQueuesDay1.JPG

      • N
        Nullity
        last edited by

        You badass! But wtf is a "RDD grapsh"? :P

        HFSC trick: On delay-insensitive queues, use link-share [m1=0, d=250, m2=<whatever bandwidth is>] on the queue and it will allow packets of that queue to be delayed up to 250ms (choose whatever worst-case delay you want). Since it is link-share, it will only delay as long as it needs to.

        You can use it to give downloads high bandwidth but huge delay, leaving low-delay opportunities for game/whatever packets. 500ms is probably a safe limit for TCP.

        Like, m1=0,d=250,m2=50Mb would give the queue a 50Mb average bitrate with a worst-case per-packet delay of 250ms. Delay = time between HFSC receiving a packet's last bit (from system) and HFSC transmitting the packet's last bit.
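
        If it helps, here is roughly how that curve looks in pf.conf; the interface, link speed, and queue names are made up, so treat this as a sketch:

            # 100Mb WAN with a low-delay game queue and a delay-tolerant bulk queue
            altq on em0 hfsc bandwidth 100Mb queue { qGames, qBulk }
            # Game traffic: a real-time curve keeps its delay low
            queue qGames bandwidth 30Mb hfsc( realtime 20Mb )
            # Bulk downloads: link-share m1=0, d=250, m2=50Mb lets HFSC hold these
            # packets for up to 250ms whenever the low-delay queue needs the link
            queue qBulk bandwidth 50Mb hfsc( linkshare(0, 250, 50Mb), default )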

        Please correct any obvious misinformation in my posts.
        -Not a professional; an arrogant ignoramus.

        • S
          sideout
          last edited by

          Yes, sorry. I had greasy pizza fingers while typing.

          • S
            sideout
            last edited by

            PRTG Graphs

            NexusLANTableTraffic.JPG
            NexusLANPFSENSENIC.JPG

            • S
              sideout
              last edited by

              Sunday charts. Also, some were complaining of "packet loss" with League of Legends. I ran their trace program and here is a snip of those results.

              NexusLANPRTGTRAFFIC.JPG
              NexusLANTrafficSunday.JPG
              NexuslanPacketsSunday.JPG
              NexusLANQUEUESUNDAY.JPG
              LoLNetworkTraceSunday.JPG

              • M
                mcwtim
                last edited by

                What all do you have PRTG set up to monitor, sideout?

                • S
                  sideout
                  last edited by

                  PRTG is monitoring pfSense, the core switch (a Dell PowerConnect 2824), and my VMware ESXi host. I have a laptop with PRTG on a static IP and output that to a large TV for attendees to see.

                  NeXusCorePRTG.JPG
                  NexusPRTGSegment.JPG

                  • S
                    sideout
                    last edited by

                    pfSense queue view while LoL is going on, so you can see the traffic in the queues.

                    NexusLANQueuesSundayLOLTourney.JPG

                    • H
                      Harvy66
                      last edited by

                      Dropped ACKs. They're such small packets that they don't consume much bandwidth. Why not increase the queue size for those?
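
                      The default ALTQ queue limit is 50 packets, so in pf.conf terms it is a one-line change; the queue name and numbers here are just an example:

                          # ACKs are tiny, so even a 500-packet queue is only a few KB of buffer
                          queue qACK bandwidth 20% qlimit 500 hfsc( linkshare 20% )

                      In the pfSense GUI that is the Queue Limit field on the queue.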

                      • S
                        sideout
                        last edited by

                        Mostly that was from when I was playing around with the m1 and d values on some queues. But it is something to consider.

                        • S
                          sideout
                          last edited by

                          Noticed some timeouts to Google DNS, so I swapped in Level 3's. Here are the queues now.

                          NexuslanTrafficqueuesSundayafterdns.JPG
                          Nexuslanprtglevel3dns.JPG

                          • H
                            Harvy66
                            last edited by

                            Are you using the DNS forwarder instead of Unbound? I rarely see timeouts to Google DNS, but I do sometimes see ping spikes, and never to both at the same time. So I often have a ping -t running against 8.8.8.8 and 8.8.4.4, and I will see one start to lag but not the other. Never for long, and very rarely.

                            • S
                              sideout
                              last edited by

                              Yes, I am using the DNS Forwarder in pfSense. Here is my DNS benchmark.

                              DNSBenchmark.JPG

                              • S
                                sideout
                                last edited by

                                Final post for now; I will post the end-of-LAN stats later.

                                NeXusLAN Traffic Totals Sunday.JPG

                                • N
                                  Nullity
                                  last edited by

                                  @sideout:

                                  Mostly that was from when I was playing around with the m1 and d values on some queues. But it is something to consider.

                                  Any luck?

                                  Please correct any obvious misinformation in my posts.
                                  -Not a professional; an arrogant ignoramus.

                                  • C
                                    cmb
                                    last edited by

                                    Good stuff, thanks for sharing.

                                    • S
                                      sideout
                                      last edited by

                                      I set the values and noticed a lot more dropped packets when I did that, so I changed it back. I will probably need to do more testing at home.

                                      I also think I was running into some issues with the NICs. The server has Broadcom NICs, and I put some customizations in the config file per the pfSense docs wiki, but I think I am going to go back to Intel NICs.

                                      I have two 4-port Intel server NICs, and the old router had Intel NICs in it. It seems that pfSense likes Intel NICs better.
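
                                      For reference, the bce tunables the docs wiki suggests go in /boot/loader.conf.local and look roughly like this (from memory, so verify against the wiki before copying):

                                          # Loader tunables commonly suggested for Broadcom bce(4) NICs
                                          kern.ipc.nmbclusters="131072"
                                          hw.bce.tso_enable=0
                                          hw.pci.enable_msix=0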

                                      Anyone have any issues with enabling hardware offloading or TOE on their NICs?

                                      • H
                                        Harvy66
                                        last edited by

                                        If I remember correctly, TOE NICs do not honor traffic shaping because the TOE is the one sending the packets, not the shaper. I could be wrong.
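
                                        If you want to rule the offloads out, you can toggle them from a shell and retest; bce0 here is just an example interface:

                                            # Turn off segmentation, large-receive, and checksum offloads for a test
                                            ifconfig bce0 -tso -lro -txcsum -rxcsum

                                        pfSense also exposes these as the offloading checkboxes under System > Advanced > Networking.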

                                        • S
                                          sideout
                                          last edited by

                                          Okay, I will have to test those settings as well. I saw the other post about CoDel dropping UDP packets; maybe that was part of the issue I was having.

                                          I will have to test putting the UDP-only queues under some other queueing discipline and then using CoDel for the TCP-only queues.

                                          There were some complaints of packet loss in some of the games that use UDP exclusively.
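
                                          Something like this layout is what I have in mind, with CoDel only on the TCP side; the names and numbers are placeholders, and in the GUI the CoDel part is just the per-queue Codel checkbox:

                                              altq on em0 hfsc bandwidth 100Mb queue { qTCP, qUDP }
                                              # TCP bulk: CoDel drops early and TCP backs off cleanly (Codel enabled here)
                                              queue qTCP bandwidth 60Mb hfsc( linkshare 60Mb, default )
                                              # UDP game traffic: plain queue with a deeper limit, no early drops
                                              queue qUDP bandwidth 40Mb qlimit 200 hfsc( realtime 20Mb )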
