WIP - A showcase of HFSC's ungodly, decoupled capabilities.



  • **Edit:** I was lazy and did not compare HFSC to the other queueing algorithms in pfSense, and my NTP graphs are fluctuating randomly, so I suspect something is awry with NTP or my pfSense configuration. This is a poor-quality post, in my opinion…
    If you want more HFSC information, please check out https://forum.pfsense.org/index.php?topic=89367.0

    Howdy.
    Here are some RRD graphs showing the effects of putting pfSense's (previously unqueued) outgoing NTPd packets into a real-time, Concave (burst) Service Curve queue.

    The child queue (qNTP) is set up with a real-time 350Kb/s burst for 5ms, then 25Kb/s. No link-share or upper-limit. I am synchronizing with 3 external NTP servers. The NTP packets are ~616 bits apiece, iirc. (616 bits would take 1.76ms to dequeue at 350Kb/s.)
    I have 2 other child queues: qACK with a link-share of 200Kb/s, and qBulk with link-share set at 50Kb/s plus a (probably unneeded?) Convex Service Curve of 0Kb/s for 10ms.

    The parent queue has an upper-limit of 600Kb/s.

    I queue virtually all TCP "passing" in on LAN into qBulk/qACK, and I have a single floating rule "matching" the NTP UDP packets as they leave (outgoing) WAN.

    I am happy to see these results, but I think they are TOO good, if I'm honest.
    My current philosophy is that real-time is WAYYYY overused. A few of the texts I have read have plainly stated that real-time is needed by virtually nothing but NTP and VoIP. HFSC's link-share is supposed to be better than all preceding link-sharing algorithms, so you're already ahead when you use link-share. It makes no sense to use up all 100% of real-time's 85% share of your throughput, but I've seen it done in plenty of established texts.

    Anyway, what's up? :D Let's get some dialog going about HFSC. It really needs to be demystified. Sometimes I feel like I have a grasp of HFSC, then I slowly remember all the many, MANY sections I couldn't comprehend, and I realize I've got a looong way to go. Should be fun though. :)
    There's like 8 people who understand it, 20 who pretend to, and a few floating balls of energy who spontaneously transcended life as we know it when they fully comprehended HFSC in its decoupled glory.

    EDIT: I am running pfSense 2.2-RELEASE NanoBSD on an old 32-bit 2GHz HP desktop with 2GB of DDR RAM, with the kernel timer at 1000Hz. (NanoBSD defaults to 100Hz, which is probably too coarse for HFSC.)
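    For what it's worth, the two-piece curve described above is easy to model. A toy sketch in Python; the 350Kb/5ms/25Kb parameters and the ~616-bit packet are the figures from this post, not measured values:

    ```python
    # Toy model of the qNTP real-time Concave (burst) Service Curve above.
    # Assumed parameters from this post: m1 = 350 kbit/s for d = 5 ms, then m2 = 25 kbit/s.
    M1, D, M2 = 350_000, 0.005, 25_000

    def service_curve(t):
        """Bits guaranteed to the queue by time t (two-piece concave curve)."""
        if t <= D:
            return M1 * t
        return M1 * D + M2 * (t - D)

    pkt_bits = 616                        # one NTP packet, per the post
    dequeue_ms = pkt_bits / M1 * 1000     # served entirely inside the burst
    print(round(dequeue_ms, 2))           # -> 1.76, matching the figure above

    # Why kernel HZ matters: at NanoBSD's default 100 Hz a timer tick is
    # 10 ms, coarser than the whole 5 ms burst window; 1000 Hz gives 1 ms.
    print(1000 / 100, 1000 / 1000)        # -> 10.0 1.0
    ```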





  • How do I get that NTPD graph? The jitter listed in the NTPD status shows about 0.1-0.3ms across the board.

    This is how I view realtime: I use it for traffic where I want low jitter. As a rule of thumb, something is at "100%" when it's at 80%. Your network link will start to show signs of latency and jitter once you get past 80%, and it ramps up quickly.

    Link share allows you to use up to 100% of your bandwidth, which is a good thing, because who wants to get only 80% of what they're paying for? But real time lets you avoid the jitter and latency of a 100% saturated link. I use real time for any low-bandwidth, low-latency flows, because those flows' bandwidth needs are easy to gauge. All other traffic gets link share.

    Nutshell:
    Realtime bandwidth is jitter-free and latency-free for dequeuing purposes.
    Linkshare bandwidth will start to experience dequeuing latency and jitter once the link gets past 80% usage.

    That's not to say that Linkshare will have bad latency or jitter; it just can't make any guarantees about dequeuing latency. All of these latencies are relative to the scheduling quantum: the scheduler cannot guarantee latencies lower than the time slices it uses for scheduling.
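    The "ramps up quickly past 80%" behavior matches the textbook single-queue delay curve. A quick illustration using generic M/M/1 queueing math (an illustrative model only, not anything HFSC itself computes):

    ```python
    # Illustrative only: in an M/M/1 queue the mean wait grows like
    # rho / (1 - rho) service times, so delay explodes near saturation.
    def mean_wait(rho, service_ms=1.0):
        return service_ms * rho / (1.0 - rho)

    for rho in (0.5, 0.8, 0.95):
        print(rho, round(mean_wait(rho), 1))
    # 0.5 -> 1.0, 0.8 -> 4.0, 0.95 -> 19.0 service times of waiting
    ```

    So 80% really is around the knee of the curve: the wait at 80% load is already 4x one service time, and it nearly quintuples again by 95%.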



  • @Harvy66:

    How do I get that NTPD graph? The jitter listed in the NTPD status shows about 0.1-0.3ms across the board.

    Services->NTP->"NTP graphs" checkbox

    Status->NTP should also be useful.

    I am showing 0.119ms, 0.153ms, 0.143ms across my 3 NTP syncs. My ping to first gateway is ~9ms.



  • @Harvy66:

    How do I get that NTPD graph? The jitter listed in the NTPD status shows about 0.1-0.3ms across the board.

    This is how I view realtime: I use it for traffic where I want low jitter. As a rule of thumb, something is at "100%" when it's at 80%. Your network link will start to show signs of latency and jitter once you get past 80%, and it ramps up quickly.

    Link share allows you to use up to 100% of your bandwidth, which is a good thing, because who wants to get only 80% of what they're paying for? But real time lets you avoid the jitter and latency of a 100% saturated link. I use real time for any low-bandwidth, low-latency flows, because those flows' bandwidth needs are easy to gauge. All other traffic gets link share.

    Nutshell:
    Realtime bandwidth is jitter-free and latency-free for dequeuing purposes.
    Linkshare bandwidth will start to experience dequeuing latency and jitter once the link gets past 80% usage.

    That's not to say that Linkshare will have bad latency or jitter; it just can't make any guarantees about dequeuing latency. All of these latencies are relative to the scheduling quantum: the scheduler cannot guarantee latencies lower than the time slices it uses for scheduling.

    I think it's important to differentiate between jitter caused by the limits of your connection and jitter caused by the limits of HFSC. HFSC makes guarantees and proves why… we all know about the "guarantees" our ISPs make us: "up to 6Mbit". :|

    Real-time, as I currently understand it, is only a drastic benefit if you take a stream with a low average bandwidth (VoIP/NTP) and give it a burst rate at least 10x faster than that average. Then your latency improves hugely.

    With link-share and Concave Service Curves, I suppose something low-bandwidth and interactive, like SSH or telnet, might be a good choice. Actually, that reminds me: how could you differentiate between an FTP/SSH connection that is downloading and one that is being used as an interactive terminal?
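    Sanity-checking that burst claim with the qNTP numbers from earlier in the thread (back-of-the-envelope; the 616-bit packet and both rates are this thread's figures):

    ```python
    # Time to transmit one packet at the guaranteed rate, comparing a flat
    # m2-only curve against the concave m1 burst (rates from this thread).
    PKT_BITS = 616

    def xmit_ms(bits, rate_bps):
        return 1000.0 * bits / rate_bps

    flat_ms  = xmit_ms(PKT_BITS, 25_000)    # m2 = 25 kbit/s
    burst_ms = xmit_ms(PKT_BITS, 350_000)   # m1 = 350 kbit/s
    print(round(flat_ms, 2), round(burst_ms, 2))  # -> 24.64 1.76
    print(round(flat_ms / burst_ms))              # -> 14
    ```

    Here the burst ratio is 14x, so the packet's worst-case transmit time drops by the same factor, consistent with the "10x faster burst" rule of thumb.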



  • Another way to state it: on average, buffers are usually either full or empty at really short time scales. Limiting real time to 80% effectively keeps the real-time backlog at 0. Link share, because it can go past 80%, can have a backlog. Real time is always dequeued before link share, so it's pretty much a way to make sure your real-time traffic has nearly "zero" jitter.

    I'm lucky because my ISP has all-symmetrical bandwidth and uses the term "dedicated" in lieu of "up to". I'm not sure of their exact implementation, but I was told they make a best effort to ensure that no customer experiences any congestion at any point in their network or on their trunk. I was told their trunk is sized to 2x peak usage. They use Level 3 as their trunk, and according to Level 3 blogs, 50% utilization is "100%" and it's time for an upgrade. According to Level 3, you should not experience internal congestion on their network either.

    Between my ISP claiming "no congestion" and Level 3 claiming "no congestion", the only congestion I really experience is once Level 3 hands off to someone else.

    edit:
    Congestion past 80% is mostly an issue of multiple flows interacting. You can easily get a single flow near 100% without buffering issues, but good luck getting 2+ flows not to step on each other's feet. Because the firewall is doing the traffic shaping, while 80% is the maximum for keeping real time free of jitter, the actual output from the firewall can be at something like 95% of the WAN rate. This is because the firewall is acting as the single flow, keeping a nice smooth stream of packets.



  • I was surprised when I read that the Internet has no use for QoS because, like you said, it should never be fully saturated.

    Hey, do you have that NTPd graph yet so I can have something to compare to? I just can't imagine that HFSC alone caused that drastic of a latency improvement. (Though, the curve is set to a max delay of 5ms, and the highest I've seen it jitter was ~4.5ms, so it sure does seem like it's doing exactly what I configured it to do…)



  • QoS is just congestion management    ;D

    I'll enable that graph when I get home tonight. I could probably have a bit of something by tomorrow, but it'll be at least 7 days until I have 7 days of data… :-)



  • @Harvy66:

    QoS is just congestion management    ;D

    I'll enable that graph when I get home tonight. I could probably have a bit of something by tomorrow, but it'll be at least 7 days until I have 7 days of data… :-)

    Multiplex it and run parallel streams.

    Just don't cross the streams. ;)



  • I got a bit of NTPD info

