  • Bandwidth limiter not opening Microsoft

    2
    0 Votes
    2 Posts
    738 Views
    H

    Probably means you did something wrong. The only way to tell if you did something wrong is to see what you did. Please post your limiter setup.

  • Question regarding bufferbloat mitigation and lan-to-lan shaping

    8
    0 Votes
    8 Posts
    3k Views
    bradyrtechB

    OK, so I think my entire hold-up was probably how I have a multi-LAN setup. If I set a traffic shaper on one of the LAN interfaces, with the goal of throttling downstream internet traffic, it also has the side effect of shaping any LAN-to-LAN traffic that passes through that interface (like from the wired LAN to the wireless LAN). Traffic going from one host to another on the same LAN (two hosts on the wired LAN, for example) is different: since those hosts hang off a switch and are in the same subnet, they aren't routing to a different subnet and their traffic isn't being throttled.

    This was probably my hold-up the entire time, as I was testing from my wireless laptop to a wired server.

    If I put everything on my LAN on the same subnet and turn on shaping on the LAN and WAN interfaces, I'll get the expected throttling of internet traffic (because I'm just dealing with a single WAN and single LAN interface).

    Anyway, I think my multi-LAN setup had me tripped up and I was missing the obvious.

    Thanks everyone for your responses and tips/tricks.

    I think I'll just set up a basic CODELQ shaper (unless there is a better scheduler to use) on WAN and one LAN and have all my hosts on the same LAN – then I'll get full gigabit between hosts on the LAN and throttled-back internet traffic between WAN and LAN.
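
    (Not from the thread, just a rough sketch of the routing point above, with made-up subnets: traffic between two hosts in the same subnet is switched locally and never reaches a router interface, so no interface shaper or limiter can touch it.)

    ```python
    from ipaddress import ip_address, ip_network

    # Example subnets only - substitute your own wired/wireless LAN networks.
    LAN_WIRED = ip_network("192.168.1.0/24")
    LAN_WIRELESS = ip_network("192.168.2.0/24")

    def crosses_router(src: str, dst: str) -> bool:
        """True if the packet must be routed, and so can be touched by an interface shaper."""
        for net in (LAN_WIRED, LAN_WIRELESS):
            if ip_address(src) in net and ip_address(dst) in net:
                return False  # same subnet: switched locally, never reaches the shaper
        return True           # different subnets (or the internet): routed and shapeable

    print(crosses_router("192.168.1.10", "192.168.1.20"))  # False - wired host to wired host
    print(crosses_router("192.168.2.10", "192.168.1.20"))  # True  - wireless to wired, crosses the router
    ```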

  • Avoid Datacenter bandwidth overages

    8
    0 Votes
    8 Posts
    1k Views
    H

    50 is the default. I recommend just enabling CoDel on each queue. Large buffers are bad because they cause bufferbloat, but they're great for high throughput (except in extreme cases, like more than 1,000 ms of bloat).
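
    (A back-of-the-envelope sketch of why big buffers turn into bloat; the link speeds below are illustrative, not from the post. The worst-case added delay is simply the full queue size divided by the link rate.)

    ```python
    def worst_case_delay_ms(queue_packets: int, mtu_bytes: int, link_mbps: float) -> float:
        """Delay added by a completely full FIFO: queued bits divided by the link rate."""
        queue_bits = queue_packets * mtu_bytes * 8
        return queue_bits / (link_mbps * 1_000_000) * 1000

    print(worst_case_delay_ms(50, 1500, 10))     # default 50-packet queue on a 10 Mb/s link: 60 ms
    print(worst_case_delay_ms(1000, 1500, 10))   # oversized 1000-packet queue: 1200 ms - the extreme case above
    ```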

  • HFSC Shaping wizard: speed never reaching limits ("missing" bandwidth)?

    22
    0 Votes
    22 Posts
    4k Views
    N

    If you are interested in pursuing a bug report, I would see how other successful pfSense bug reports were conducted.
    https://redmine.pfsense.org

  • What am I doing wrong?

    13
    0 Votes
    13 Posts
    4k Views
    E

    What I do, which may not be what you do, and which remains (in my testing) incompatible with transparent Squid on the same box…

    Avoid the wizard. Back up your configuration before starting. Traffic shaper screw-ups can be epic, and being able to back out and do over is a good plan. I've personally never had a good outcome from the wizard, YMMV.

    Traffic shaper, first tab ("By Interface"): WAN (CODELQ - set nothing; it's CODELQ, nothing should need to be set), LAN (same). Enable.

    Third tab, Limiters: create LanIn (this is what you think of as "out" to the world) and LanOut (this is what you think of as "in" from the world), and set values for the traffic limits you want in each direction. You may tune these later on. These should be (or possibly become at the next step) yellow folder icons.

    Leave "mask" set to none here.

    With those created and enabled, select LanIn and add a queue, which should show as a white page icon. Under the LanIn limiter I named it LanInQ and selected Source addresses. Same with LanOut: create LanOutQ with Destination addresses.

    Change the firewall rules on LAN: under "Advanced", set "In/Out" to run traffic through LanInQ/LanOutQ.

    LanIn (traffic coming in on the LAN interface, headed out to the world) is pretty closely controlled (you actually have direct control here); LanOut is a bit less under your direct control, but the setting does have an influence.

    This specific setup divides the bandwidth among hosts "evenly" (only even if they all want more than they can have). You can also use other variations to provide pipes of a specific limited bandwidth; I came down on the side of "bandwidth is wasted if not used", so if one hog gets it all when nobody else is using it, fine, but I needed to make sure that if 9 or 90 other folks showed up they would get a "fair" share as near as possible. This mostly does that (far better than just capping everyone's bandwidth, which means the hogs are on there longer hogging and nobody's speed is EVER good).
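
    (A toy sketch, not pfSense code, of the "even only when contended" behaviour described above, using a max-min fair split with invented demand numbers: a lone hog can take everything, but anything a light user doesn't want is redistributed rather than wasted.)

    ```python
    def max_min_fair(total_mbps: float, demands_mbps: list[float]) -> list[float]:
        """Split total_mbps across hosts: equal shares, with unused share redistributed."""
        alloc = [0.0] * len(demands_mbps)
        unsatisfied = list(range(len(demands_mbps)))
        remaining = total_mbps
        while unsatisfied and remaining > 1e-9:
            share = remaining / len(unsatisfied)
            for i in list(unsatisfied):
                give = min(share, demands_mbps[i] - alloc[i])
                alloc[i] += give
                remaining -= give
                if alloc[i] >= demands_mbps[i] - 1e-9:
                    unsatisfied.remove(i)   # this host got all it wanted
        return alloc

    print(max_min_fair(90, [500]))          # one hog alone -> [90.0]
    print(max_min_fair(90, [500, 5, 20]))   # hog plus light users -> [65.0, 5.0, 20.0]
    ```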

    The limiter numbers do need to be less than the actual bandwidth, but not by quite as much as you are proposing (90-95% is generally fine). I look at what my "quality" figures (ping times) are running to adjust my tuning - if the limiter size is too large, the ping times go to heck in a handbasket.

    I played around with HFSC for quite a while before arriving here, and here does what I want much better, IME.

  • MOVED: Problem with web filtering

    Locked
    1
    0 Votes
    1 Posts
    493 Views
    No one has replied
  • How to shape an IP to a slower speed after it has used 20G?

    6
    0 Votes
    6 Posts
    1k Views
    E

    The manual (could be automated) scheme I use (with the limiter) is to review usage in bandwidthd and put the winners into a lower-priority queue - this is done by writing IP addresses to an alias, and the LAN rules run the alias through the appropriate queue.

    For your scheme you could put them into a limiter queue that is speed-limited per pipe (rather than my scheme of a queue that has a lower priority but no actual numerical limit if nobody else is using bandwidth).
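
    (A hedged sketch of how the manual review step could be automated; the threshold and addresses are invented, and the output is just the list of IPs you would write into the throttling alias.)

    ```python
    # Read per-IP byte counts (e.g. exported from bandwidthd) and report which
    # addresses have crossed the quota and belong in the slow-queue alias.
    THRESHOLD_BYTES = 20 * 1024**3   # the "20G" from the original question

    def over_quota(usage_bytes: dict[str, int]) -> list[str]:
        """Return the IPs whose accumulated usage exceeds the quota."""
        return sorted(ip for ip, used in usage_bytes.items() if used > THRESHOLD_BYTES)

    usage = {"192.168.1.23": 27 * 1024**3, "192.168.1.40": 3 * 1024**3}
    print(over_quota(usage))   # ['192.168.1.23'] -> goes into the throttling alias
    ```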

    I think the captive portal has a built-in setup to simply cut them off after X amount (I have not used the portal myself).

  • Ports vs ip address on traffic shaping

    3
    0 Votes
    3 Posts
    957 Views
    S

    And if you want to test this - open a game on your PC, do a packet capture while you're playing it on a Monday, then wait till the weekend or even the next day, do the same thing, and compare the captures.

    See if they differ at all.

    Shaping gaming traffic is kind of like hunting land mines with a field knife - it is a slow and methodical process that requires patience because if you rush it - boom!!!  :)

  • Suggestion: sort the queues by priority

    6
    0 Votes
    6 Posts
    1k Views
    J

    Yes, I started using that queueing system (please help me God)… and priorities are not relevant anymore...

    Thanks for your explanations...

  • The topology of hierarchical queues can be surprisingly powerful.

    3
    0 Votes
    3 Posts
    1k Views
    H

    I went with this setup last night.

    20% for ACKs, but then qClassified gets the golden ratio (about 1.618x) more bandwidth than qUnclassified. qUnclassified is where my unclassified and P2P traffic goes, because P2P is so hard to classify. I have broken qUnclassified into two groups: qUDP and the normal default queue, which will primarily be unclassified TCP. I have a floating rule at the very top to match all UDP traffic and place it in qUDP.

    The bandwidth is split 50/50 between qDefault and qUDP, but qUDP has a service curve that gives a 25% boost over 5 ms. My connection is quite fast, so 5 ms is a long time. Based on my limited understanding of service curves, m1 is the pseudo-bandwidth, for lack of a better term, d is the target latency (it must be a realistic value for your connection), and of course m2 is your actual real bandwidth.

    I don't want to get into the exactness of how service curves work, but the one example that I saw was a 64Kb rate where they wanted to cut the jitter to 1/4, so they gave the queue an m1 of 128Kb and an m2 of 64Kb, because they wanted a 64Kb average but wanted the link to act like 128Kb when it came to scheduling the packet. I say "packet" and not "packets", because 64Kb is such a slow rate that, given the size of the packets, it worked out to take 10 ms to transfer one packet. So they set up the curve to have a d (duration) of 20 ms. The final result was m1/d/m2 of 128Kb/20ms/64Kb.

    The way I interpreted that one example is: if you have latency-sensitive traffic where the intervals of traffic result in a burst relative to the provisioned sustained bandwidth, then you can set m1 and d such that, if the queue is still within its average bandwidth, it can get a "burst" (term used very loosely) in order to allow the packets to get scheduled sooner than they would have otherwise for their sustained bandwidth. Of course, in order to reduce the delay of one queue, you need to increase the delay of the other queues, not that I care for my "normal" traffic. The overall average bandwidth is still maintained, and the two queues will still have an average split of 50/50. This also implies that the "burst" is a debt to be repaid by consuming less bandwidth after the burst.
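
    (For reference, this is the standard two-piece linear service curve from the HFSC literature, written as a small calculator; the 128Kb/20ms/64Kb triple is the example discussed above, not a recommendation.)

    ```python
    def service_curve_bits(t_ms: float, m1_bps: float, d_ms: float, m2_bps: float) -> float:
        """Cumulative service (bits) guaranteed after t_ms of continuous backlog:
        m1 applies for the first d_ms, m2 afterwards."""
        if t_ms <= d_ms:
            return m1_bps * t_ms / 1000
        return m1_bps * d_ms / 1000 + m2_bps * (t_ms - d_ms) / 1000

    # The 64Kb example from the post: m1 = 128Kb, d = 20 ms, m2 = 64Kb.
    print(service_curve_bits(10, 128_000, 20, 64_000))    # 1280 bits by 10 ms - double what m2 alone would give
    print(service_curve_bits(1000, 128_000, 20, 64_000))  # 65280 bits over 1 s - close to the 64Kb long-run average
    ```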

    ShaperHierarchy.png

  • HFSC Traffic Shaping not Artificially Capping my Bandwidth

    11
    0 Votes
    11 Posts
    2k Views
    N

    Make sure your WAN interface is properly throttled. Take your real-world, average maximum throughput/goodput value and set the interface ~3-10% lower than that value.

    Until your interface is properly rate-limited, HFSC (or any pfSense/ALTQ sched algo) has no queue to manipulate. The interface needs to receive traffic slightly faster than it can send, otherwise there's no traffic in the buffer/queue for HFSC to intelligently (re)schedule.
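
    (Quick arithmetic for the sizing advice above; the measured-goodput figure is made up, and the 3-10% headroom is the range from the post. The same idea applies to limiter numbers.)

    ```python
    def shaper_ceiling_mbps(measured_goodput_mbps: float, headroom_pct: float = 5.0) -> float:
        """Set the shaper slightly below the real line rate so a queue actually forms."""
        return measured_goodput_mbps * (1 - headroom_pct / 100)

    print(shaper_ceiling_mbps(94.0, 3))    # ~91.2 Mb/s with 3% headroom
    print(shaper_ceiling_mbps(94.0, 10))   # ~84.6 Mb/s with 10% headroom
    ```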

    PS - Are you resetting states?

  • Bufferbloat and the Interface is blank

    8
    0 Votes
    8 Posts
    2k Views
    KOMK

    Are both your NICs the same?  I've seen cases where you don't see a queue if its NIC is unsupported.

  • Traffic shaping LAN option not showing?

    8
    0 Votes
    8 Posts
    1k Views
    K

    I will try out the dual-port (82546) and keep you posted on how it works  ;)

    Thanks KOM

  • Can i monitor traffic for Default queue?

    4
    0 Votes
    4 Posts
    868 Views
    N

    Using tcpdump with the pflog interface might be what you want.

    Google for more info.

  • Classifying Dropbox Traffic

    7
    0 Votes
    7 Posts
    3k Views
    KOMK

    There is no Dropbox option in the shaping wizard. As stated earlier, it's almost impossible. They use HTTPS to Amazon EC2. Good luck blocking it without potentially causing other problems. The only way to do it would be to get your hands on a definitive list of netblocks used by Dropbox, if such a static list even exists.

  • Layer 7 High CPU?

    5
    0 Votes
    5 Posts
    1k Views
    K

    awww  :-[ thanks anyway

  • QoS/PRIQ - as of v2.3, what works what does not?

    8
    0 Votes
    8 Posts
    2k Views
    H

    This is how I understand queue assignment in pfSense. When a new connection is created, it must pass the firewall rules. Two states will be created and attached to the appropriate interfaces. At the time the states are created, they get assigned to their queues based on the rules, but only one rule gets to apply.

    For example, if I'm trying to connect out to Netflix and the new connection is initiated on my LAN interface, the rule on my LAN interface that passes the connection gets to assign the queue. If the queue is qNetflix, then both states in the pair will attempt to be assigned to qNetflix, but only if qNetflix exists on both interfaces. If my WAN interface does not have qNetflix defined, then that state will get dropped into the default queue of the WAN interface, but the state on the LAN interface will be placed in qNetflix.

    It's generally a good idea to declare the same named queues on all of the interfaces you care about; otherwise one or both states may be placed in the default queue if the name does not exist there.
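
    (A toy model, not pfSense internals, of the per-interface assignment just described: the matching rule names one queue, and each interface either has a queue by that name or falls back to its default.)

    ```python
    def assign_state_queues(rule_queue: str, queues_by_iface: dict[str, set[str]]) -> dict[str, str]:
        """For each interface, use the rule's queue if it exists there, else the default queue."""
        return {
            iface: rule_queue if rule_queue in queues else "qDefault"
            for iface, queues in queues_by_iface.items()
        }

    ifaces = {"LAN": {"qDefault", "qNetflix"}, "WAN": {"qDefault"}}  # qNetflix not defined on WAN
    print(assign_state_queues("qNetflix", ifaces))
    # {'LAN': 'qNetflix', 'WAN': 'qDefault'} - exactly the split described in the post
    ```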

    CBQ is roughly the same as HFSC at the abstract level, but HFSC decouples bandwidth and delay in more than one way from the old round-robin ways of CBQ. HFSC does not need to create an artificial backlog of packets, nor does it add additional latency to packets in order to maintain proper bandwidth. On top of that, if you know what you're doing, you can decouple bandwidth and latency even further by using service curves. I will not pretend to know exactly what is going on, but the gist seems to be that you can make a low-bandwidth queue have the delay of a high-bandwidth queue without giving it more bandwidth.

  • Dual Wan / 13VLan Bandwidth Limit

    14
    0 Votes
    14 Posts
    2k Views
    DerelictD

    The DHCP pass rules are hidden and are above that.

    Good luck.

  • Traffic Shaping Worse Than Baseline?

    23
    0 Votes
    23 Posts
    6k Views
    N

    @CaptainElmo:

    Is any part of the PRIQ queue processing offloaded in a manner which HFSC is not? Could there be a situation where I am hitting processing limits of an offloaded resources which are not reported as part of the main CPU statistics?

    The CPU needed for any sched algo will be minimal. Elegance and efficiency are perhaps more important than actual scheduling capability (Stochastic Fair Scheduling, for example). HFSC, perhaps the most complex and CPU-intensive, was capable of 80,000+ packets per second on a 200 MHz Pentium Pro.

  • Ensure voip latency between 2 site with a dynamic link bandwidth

    3
    0 Votes
    3 Posts
    778 Views
    S

    Hi Harvy66,

    Thank you for your clear answer, I got the point (although I don't like it  :)).

    You are right, I can have the shaping be less effective (or not effective at all) when the bandwidth drops, but unfortunately it is in those moments that I need it most, so I figured out another possibility. As the VoIP traffic flows in the Lan2Lan tunnel, maybe I can do the following:

    Office2 (bad internet):
    - On the WAN: PRIO on IPsec traffic for the Lan2Lan tunnel, without specifying the maximum bandwidth available
    - On the IPsec interface: PRIO on VoIP traffic, again without specifying the maximum bandwidth available

    Office1 (good internet):
    - On the WAN: shape the IPsec traffic for the Lan2Lan tunnel (HFSC, or even simpler CBQ or CODEL, or any mix of those) with the bandwidth set to a reasonable value, let's say 20Mb or so
    - On the IPsec interface: shape the VoIP traffic (HFSC or…) with a bandwidth set to sustain a bunch of concurrent calls, let's say 512Kb or so

    Actually, the Lan2Lan tunnel is OpenVPN and serves traffic other than VoIP (SMB, HTTP, SSH...). Maybe it is better to set up another Lan2Lan IPsec tunnel just for VoIP (instead of substituting the OpenVPN one) to better try to guarantee low latency for VoIP (which is my only requirement at the moment) with the PRIO/shaping above.

    In my mind, this should at least help VoIP latency when the bandwidth at Office2 falls (PRIO), and the shaping at Office1 should help with the queue starvation PRIO introduces when the bandwidth at Office2 is not (too much) oversubscribed.

    Does this make sense to you?

    Thank you very much!

    Ciao,
    S.
