Auto Sense WAN Connection Speed?



  • Is there a way to automatically sense the WAN connection speed and have pfSense use that speed in its traffic shaping?

    I've run into a situation where my ISP throttles users at peak times, so the actual speed drops between 8pm and 12 noon - ideally, pfSense could run a speedtest each hour and adjust accordingly.

    Can this be done?
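    There is no built-in auto-sense, but the arithmetic behind the idea is simple: keep a window of recent speedtest results and set the shaper to a conservative fraction of the worst one. A minimal sketch (the 0.9 headroom factor is an assumption, and actually applying the rate to pfSense is left out, since there is no supported API for that):

```python
def shaper_rate_mbit(samples, headroom=0.9):
    """Given recent measured throughputs in Mbit/s, return a
    conservative shaper limit: a fraction of the worst sample,
    so the shaper stays the bottleneck even at throttled speeds."""
    if not samples:
        raise ValueError("need at least one measurement")
    return headroom * min(samples)

# e.g. hourly speedtest results during an evening of ISP throttling
print(shaper_rate_mbit([48.2, 31.5, 35.0]))
```

    Pinning the shaper to the worst recent sample trades some off-peak throughput for a shaper that keeps working through the throttled hours.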



  • No.  If your ISP starts to throttle you, then you're going to have to use 90% of your lowest throttled speed for the shaper to be effective.  If this is your scenario, I might suggest using PRIQ instead of HFSC.  PRIQ doesn't really care about bandwidth, as it uses a priority queueing system instead of bandwidth allocation.  I think it might work better for your situation.



  • HFSC's link-sharing doesn't care about exact bandwidth either, only about the relationships between the allocated throughputs. So, give qACK "500Kbit/s" and give qBulk "100Kbit/s" and HFSC will keep that ratio consistent, regardless of available bandwidth.

    It's like saying qACK will get 5 bytes/bits/nibbles for every 1 byte/bit/nibble that qBulk gets. From 10Kb bandwidth to 40Gbit, the ratio stays the same.

    Disclaimer: I only pretend to know wtf I am talking about when it comes to HFSC.  (Like everyone else.) ;)
    Please verify my assumptions.
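    The ratio claim above can be checked with simple arithmetic: link-sharing distributes whatever bandwidth is available in proportion to the configured values, so scaling the link does not change the split. A toy model of that idea (plain weighted sharing, ignoring HFSC's service curves and real-time criterion entirely):

```python
def link_share(available, configs):
    """Split 'available' bandwidth proportionally to the configured
    link-share values -- only the ratios matter, not the absolutes."""
    total = sum(configs.values())
    return {q: available * c / total for q, c in configs.items()}

queues = {"qACK": 500, "qBulk": 100}   # Kbit/s as configured
slow = link_share(1_000, queues)       # 1 Mbit/s actually available
fast = link_share(40_000_000, queues)  # 40 Gbit/s actually available
# qACK:qBulk stays 5:1 at both speeds
```

    Whatever the link is really delivering, qACK gets five units for every one unit qBulk gets.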



  • From what I understand, HFSC still needs to know your upper limit so that it can limit the rate properly.  HFSC looks cool and sounds really sexy, but it's a real pain for me to wrap my head around.  I switched from HFSC to PRIQ and never noticed a performance difference, but my config is a lot simpler now.



  • @KOM:

    From what I understand, HFSC still needs to know your upper limit so that it can limit the rate properly.  HFSC looks cool and sounds really sexy, but it's a real pain for me to wrap my head around.  I switched from HFSC to PRIQ and never noticed a performance difference, but my config is a lot simpler now.

    If you're fine with saturating your link (or perhaps check the CODELQ box?), you can simply not enter a bandwidth into the root queue/interface. This leaves that queue/interface with no upper limit. Then put the bandwidth/link-share into the child queues and you'll be limitless.



  • Bandwidth management of any kind cannot work without setting your bandwidth; otherwise, the manager will assume you have free bandwidth and just forward packets without delay. The whole point of QoS is deciding what to do with a limited resource. Accurately defining that limit is key for QoS to work correctly, unless there is some side-channel way for the shaper to know how much bandwidth is currently available.



  • @Harvy66:

    Bandwidth management of any kind cannot work without setting your bandwidth; otherwise, the manager will assume you have free bandwidth and just forward packets without delay. The whole point of QoS is deciding what to do with a limited resource. Accurately defining that limit is key for QoS to work correctly, unless there is some side-channel way for the shaper to know how much bandwidth is currently available.

    The whole point of the link-sharing algorithm in HFSC is the "fair distribution of excess bandwidth" (as stated in the conclusion of the HFSC paper: http://www.ecse.rpi.edu/homepages/koushik/shivkuma-teaching/sp2003/case/CaseStudy/stoica-hfsc-ton00.pdf).

    Whether the excess bandwidth has a limit, artificial or otherwise, is another matter entirely.
    You CAN create an HFSC queue that uses the link-sharing criteria exclusively and has no artificial limits.

    As far as I know, what I have stated is factually correct. If it's not, please give me some proof and I will edit my posts. There is far too much misinformation about HFSC already. I would hate to contribute. :)

    Edit: Blah. I think I was being a bit of a jerk there. Let me re-write my post more compassionately.

    To be frank, your explanation of what Bandwidth Management is meant to achieve is overly restrictive. Some people want maximum bandwidth at the cost of poor latency, while someone else may want persistent low latency by artificially limiting their downloads. I'm sure there are tons of other goals, with tons of different ways of achieving them.

    I understand your point, but even with an uncapped connection, there are CODELQ, SFQ/FAIRQ, RED, RIO, etc., which can all achieve slightly different improvements on a saturated link. Even without those, HFSC can still guarantee the latency and/or throughput of the real-time criterion while the link-share criterion fairly saturates the link.

    I am still a newbie with HFSC and QoS/traffic-shaping in general, but I'll give any help I can.
    It would be great if we could get some simplified documentation out there. The Linux "tc-hfsc" manpage is pretty damn good.
    I can offer no help with the math underlying HFSC. It's like my GED is worthless. :\

    :D

    Anyway.
    @tuffcalc, depending on what your goals are (upload and/or download shaping), CODELQ or FAIRQ might be exactly what you are looking for. pfSense's CODELQ implementation is meant to maximize throughput while keeping buffers/queues from filling up, which keeps latencies optimal. FAIRQ is meant to share bandwidth fairly among simultaneous downloads/uploads.

    Can you give us some more details on what you're trying to accomplish?



  • @Nullity:

    You CAN create an HFSC queue that uses the link-sharing criteria exclusively and has no artificial limits.

    What you said is "true", but the context is a bit off. HFSC's definition of "free" bandwidth is based on the registered bandwidth set on the interface. If your interface says 1Gb but your internet connection is only 50Mb, HFSC will say "sure, I have free bandwidth, I have 1Gb of bandwidth, send that 100Mb flow right on through, don't slow down". All you've done at that point is make HFSC apply effectively no restrictions and hand off 100Mb/s of data to your 50Mb/s connection, which then starts dropping packets once the buffer is full.

    You cannot QoS traffic if your buffer is empty. Many hardware implementations of QoS don't even enable QoS until the buffer is at least 80% full. If you don't restrict your interface, you're just telling it to forward at line rate and your buffer will never have anything in it because it will forward at 1Gb/s.

    The whole issue is that sending packets is a non-blocking, asynchronous operation relative to the network. I can send all the packets I want, but eventually there will be a choke point. The whole point of traffic shaping is to make your firewall the choke point so your firewall can shape the congestion. If the congestion happens in someone else's network, you can't shape it.
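    The choke-point argument can be illustrated with a toy shaper: if the shaper's rate is below the upstream bottleneck, the backlog forms locally where it can be managed; if the shaper's rate is at line rate, the local queue stays empty and the backlog forms somewhere upstream instead. A minimal per-second accounting sketch (not how ALTQ is actually implemented):

```python
def local_backlog(arrival_rate, shaper_rate, seconds):
    """Per-second accounting: traffic arrives at arrival_rate and
    the shaper releases at most shaper_rate; return the queue
    depth that builds up in the local (shapable) buffer."""
    backlog = 0
    for _ in range(seconds):
        backlog += arrival_rate
        backlog -= min(backlog, shaper_rate)
    return backlog

# shaper set below the 50Mb/s uplink: the queue forms locally
print(local_backlog(arrival_rate=100, shaper_rate=45, seconds=10))   # → 550
# shaper left at the 1Gb/s line rate: the local queue stays empty
print(local_backlog(arrival_rate=100, shaper_rate=1000, seconds=10)) # → 0
```

    With a non-zero local queue the shaper has something to reorder and prioritize; with an empty one it has nothing to decide.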



  • @Harvy66:

    HFSC's definition of "free" bandwidth is based on the registered bandwidth set on the interface.
    Regarding HFSC's definition of "free" bandwidth, here is a quote from tc-hfsc's manpage.

    LS criterion's task is to distribute bandwidth according to specified
    class hierarchy. Contrary to RT criterion, there're no comparisons
    between current real time and virtual time - the decision is based
    solely on direct comparison of virtual times of all active subclasses
    - the one with the smallest vt wins and gets scheduled. One immediate
    conclusion from this fact is that absolute values don't matter - only
    ratios between them (so for example, two children classes with simple
    linear 1Mbit service curves will get the same treatment from LS
    criterion's perspective, as if they were 5Mbit).

    Being the choke point is about latency AFAIK, and latency is just one part of the QoS/traffic-shaping spectrum. You can shape for maximum bandwidth, minimum bandwidth, stream fairness, host fairness, and many other types of cases that neither you nor I have experienced.
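    The manpage's point about virtual times can be shown directly: each round, pick the active class with the smallest vt and advance its vt by 1/weight. The resulting split depends only on the ratio between the weights, so scaling every configured value by the same factor changes nothing. A stripped-down sketch of min-vt scheduling (real HFSC uses service curves, not plain weights; exact fractions avoid float ties):

```python
from fractions import Fraction

def schedule(weights, rounds):
    """Min-virtual-time scheduling: each round the class with the
    smallest vt sends one unit and its vt advances by 1/weight,
    so only the ratios between the weights matter."""
    vt = {c: Fraction(0) for c in weights}
    sent = {c: 0 for c in weights}
    for _ in range(rounds):
        c = min(vt, key=vt.get)
        sent[c] += 1
        vt[c] += Fraction(1, weights[c])
    return sent

print(schedule({"qACK": 500, "qBulk": 100}, 600))   # → {'qACK': 500, 'qBulk': 100}
print(schedule({"qACK": 2500, "qBulk": 500}, 600))  # same split: absolutes don't matter
```

    Multiplying both weights by five leaves every scheduling decision, and therefore the 5:1 split, unchanged.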



  • @Nullity:

    Being the choke point is about latency AFAIK, and latency is just one part of the QoS/traffic-shaping spectrum. You can shape for maximum bandwidth, minimum bandwidth, stream fairness, host fairness, and many other types of cases that neither you nor I have experienced.

    Ehhhh, yes and no. Additional latency and jitter are caused by buffering, and buffering is caused by bursty traffic. The point of traffic shaping is to massage the bursts to smooth them out. The biggest issue is that bandwidth usage is never stable with human-driven usage. You can still maintain minimum latency and maximum bandwidth, but you need to temporarily increase the latency or drop packets for flows that are bursting.

    CoDel and HFSC kind of do the same thing: they manage buffering. CoDel does it in a way that is stateless, while HFSC is stateful.

    All forms of QoS, traffic shaping, and AQMs are just buffer managers. In order to manage a buffer, you need a backlog, and the only way to get a backlog is to fill up an interface faster than it's draining. Let me tell you, if you have a 1Gb LAN and a 1Gb WAN, your firewall is almost always going to be draining faster than it's filling up, draining right into your ISP, which will quickly fill up and backlog unless you have a 1Gb connection to your ISP.

    In order for QoS, traffic shaping, and AQMs to be useful, you need to limit your interface rates to match your real bandwidth. If you have dedicated bandwidth, this is simple; but if you have fluctuating bandwidth, you need a way to detect the buffering that is happening on an interface that is not under your control and feed that back into your system.

    CoDel on a cable modem is useful because cable modems pretty much just use time-division multiplexing to share bandwidth. The cable modem knows when it gets to drain its buffer. Your firewall does not have that information and will send data as fast as you let it, not as fast as your cable modem will let it.
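    The feedback loop described above, detect buffering you don't control and feed it back into the shaper, can be sketched as a latency-driven controller: treat rising RTT as the signal that an upstream queue is forming and back the shaper off until it clears. A toy version (the threshold, step sizes, and bounds are all made up for illustration):

```python
def adjust_rate(rate, rtt_ms, base_rtt_ms, floor=10.0, ceil=100.0):
    """If measured RTT is well above the idle baseline, an upstream
    buffer is filling: cut the shaper rate. Otherwise probe upward
    cautiously to reclaim bandwidth. Rates are in Mbit/s."""
    if rtt_ms > base_rtt_ms + 20:   # bloat detected (threshold is arbitrary)
        return max(floor, rate * 0.9)
    return min(ceil, rate * 1.02)   # slowly probe for more bandwidth

rate = 50.0
for rtt in [15, 16, 90, 85, 17]:    # ms samples against a 15 ms baseline
    rate = adjust_rate(rate, rtt, base_rtt_ms=15)
```

    The asymmetric steps (fast decrease, slow increase) keep the shaper below the fluctuating real bandwidth most of the time, which is what keeps the backlog on the firewall's side.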



  • I just realized something: I forgot about pause frames. The modem can tell your WAN NIC to back off, which will give packets time to buffer in your firewall. This will give some decent benefit, but it will not stop the bufferbloat issue.

    Personally, I disabled pause frames because of the issues they can cause, but they're fine for point-to-point links, like your WAN into your modem.

    In my case, pause frames make pretty much no difference because my ISP recently changed our ONTs to run at a full 1Gb and now traffic-shapes upstream. I used to get a hard stop at my max rate, but now it has a slight burst to it. Unless I attempt to transfer 1Gb/s, I won't get pause frames.