Changes in Shaping for LAN Parties - Multiple Cable Modems



  • So after using HFSC, and with the re-emergence of TCP (rather than strictly UDP) for game client traffic, I have shifted from shaping to limiting.  Here is what I have done:

    1. Define aliases for the DHCP pools, putting groups of even and odd IPs into an alias group, e.g. ModemPoolGroup1.
    2. Use LAN firewall rules for TCP and UDP that send traffic from each specific alias out a specific modem.
    3. Define a global limiter for TCP and UDP and apply it to the above firewall rules.  I set TCP at 1Mb/s down and 1.5Mb/s up, and UDP at 2Mb/s down and 2Mb/s up.
    4. I put about 50 to 60 IPs per modem pool.
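    As a rough illustration of step 1, here is how the even/odd alias groups could be generated.  The function name and the exact offsets are hypothetical, not taken from the real configs:

```python
import ipaddress

def split_pool(network: str, start: int, count: int):
    """Split a run of host IPs into even/odd groups, mimicking the
    even/odd modem-pool aliases described above (hypothetical helper)."""
    hosts = list(ipaddress.ip_network(network).hosts())[start:start + count]
    even = [str(ip) for ip in hosts if int(ip) % 2 == 0]
    odd = [str(ip) for ip in hosts if int(ip) % 2 == 1]
    return even, odd

# 120 addresses -> two alias groups of ~60 IPs each, one per modem
even_pool, odd_pool = split_pool("172.22.0.0/22", 0, 120)
```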

    Additionally, for streaming I would use a 4th modem, along with the LAN cache server.

    The obvious downside is that if a modem goes down, you are dumping more traffic on the remaining modems.  The upsides are:

    1. Torrents are now limited to a specific rate and will not affect everyone, only the modem group they are on.  I found that torrents are now using UDP as well, so limiting only TCP was not working.

    2. You don't have to worry about some games working funky with multi-WAN load balancing, where it would send game packets out WAN1, send the PunkBuster or anti-cheat traffic out WAN2, and drop the session because of it.

    Some other inherent issues: you are limited to 4 WANs on one pfSense box.  Even though you can add more interfaces, you can only set up 4 DNS entries under System > General, so that is a hard limit (unless there is something I am missing there).

    I ran this at 2 LAN parties and it seems better overall.  It let people download the things they wanted but kept them capped when they weren't on the cache.  I didn't really have to worry about torrenters, as they hit the limiter as well.

    If anyone is interested in the config let me know and I will post it up for people to use.  Obviously you would need to either change your IP scheme or use mine for it to work without adjustments.

    I was able to pretty much keep the load balanced across all 3 modems for the whole event.



  • If anyone is interested in the config let me know and I will post it up for people to use.

    Please do, preferably with screens so that dummies like me follow along.



  • Can you think of any other disadvantages to the switch from HFSC/ALTQ to limiters/dummynet?  Does the lack of Codel impact latency at all?

    Hopefully FreeBSD's dummynet implementation of fq_codel will appear in pfSense soon, making limiters more capable.



  • Every time I tried to use Codel on HFSC queues it would go south fast at the LAN parties.  Maybe that was just a product of the environment, maybe not.  If I had a single big connection then it might not be an issue.  Unfortunately I can't get lucky and score 500Mb/500Mb fiber like some LAN parties in the area do.



  • What kind of issues did you get with Codel, and at what kind of bandwidth?  Codel really hates less than 1Mb/s and prefers at least 10Mb/s.  If you had 50Mb/s and 10 HFSC queues, it's possible that large swings in bandwidth, where a Codel queue may go from the full 50Mb/s one moment down to 5Mb/s the next, could cause Codel to start dropping a lot of packets attempting to fight the bloat.

    In my simple at-home setup with only a few queues, Codel seems to work very well with lots of flows in all but the most extreme situations.  I was able to artificially create an issue with the DSLReports speed test by setting HTTP to have 80% of the bandwidth, starting several highly seeded torrents, then suddenly updating HFSC to give HTTP 1% and torrents 99%.  With so many connections already established, the exponential "slow start" bandwidth discovery per connection very quickly monopolized the bandwidth.  This created an artificial in-rush.

    In short, Codel does not like sudden large swings in available bandwidth.



  • Which would be the case at a large LAN party.  Typically I have 4 queues defined: P2P (default), ACK, Games, and HTTP on the LAN, and then Default (default) on the WAN.

    But at LANs you will see wide swings in bandwidth usage for sure.

    I was seeing games get a massive lag/latency spike, followed by packet loss that shows up in game as rubber banding.  I haven't gone back to using it, as what I am doing now seems to be working the best.



  • Were many of the games UDP-based?



  • @sideout:

    Which would be the case at a large LAN party.  Typically I have 4 queues defined: P2P (default), ACK, Games, and HTTP on the LAN, and then Default (default) on the WAN.

    But at LANs you will see wide swings in bandwidth usage for sure.

    I was seeing games get a massive lag/latency spike, followed by packet loss that shows up in game as rubber banding.  I haven't gone back to using it, as what I am doing now seems to be working the best.

    If the case was a lack of bandwidth, then the Games queue needs more bandwidth assigned. HFSC is about assigning minimum bandwidths. At home, I give P2P(default) 20%, Normal traffic 40%, high priority 40%. Games don't need a lot of bandwidth, typically less than 64Kb/s per user unless you're on some high update rate FPS game. Some games you can play over 14.4k dialup as long as no VoIP.

    100 players all using 64Kb/s would be 6.4Mb/s.  Treat 80% utilization as full, so add 25% to 6.4Mb/s and you get 8Mb/s.
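    Spelled out as arithmetic (the 80%-utilization target and the 64Kb/s-per-player figure are rules of thumb, not measurements):

```python
players = 100
per_player_kbps = 64                          # rule-of-thumb game traffic per player
game_mbps = players * per_player_kbps / 1000  # 6.4 Mb/s of game traffic
# Size the queue so game traffic fills only 80% of it,
# i.e. scale by 1/0.8, which is the same as adding 25%.
queue_mbps = game_mbps / 0.8                  # 8.0 Mb/s
```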



  • @Harvy66:

    If the case was a lack of bandwidth, then the Games queue needs more bandwidth assigned. HFSC is about assigning minimum bandwidths. At home, I give P2P(default) 20%, Normal traffic 40%, high priority 40%. Games don't need a lot of bandwidth, typically less than 64Kb/s per user unless you're on some high update rate FPS game. Some games you can play over 14.4k dialup as long as no VoIP.

    100 players all using 64Kb/s would be 6.4Mb/s.  Treat 80% utilization as full, so add 25% to 6.4Mb/s and you get 8Mb/s.

    With HFSC you can also do the opposite and assign worst-case latency maximums by setting m1 to 0 and d to the maximum latency in the link-share (or real-time?) parameters.  This might be best used when you have many latency-sensitive queues and only a few latency-insensitive queues (where you'd employ the method).
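    For reference, an HFSC service curve is two-piece linear: slope m1 for the first d milliseconds, then slope m2 afterward.  A small sketch of that model (parameter values are illustrative only):

```python
def service_curve(t_ms: float, m1_kbps: float, d_ms: float, m2_kbps: float) -> float:
    """Cumulative service (in kilobits) guaranteed after t_ms milliseconds
    by an HFSC two-piece linear curve: slope m1 until d, slope m2 after."""
    if t_ms <= d_ms:
        return m1_kbps * t_ms / 1000
    return m1_kbps * d_ms / 1000 + m2_kbps * (t_ms - d_ms) / 1000

# m1 = 0 with d = 30 defers a queue's guarantee for its first 30 ms,
# which is the "opposite" trick described above.
deferred = service_curve(40, 0, 30, 1000)  # only 10 ms at 1000 Kb/s = 10 Kb
```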



  • @Harvy66:

    @sideout:

    Which would be the case at a large LAN party.  Typically I have 4 queues defined: P2P (default), ACK, Games, and HTTP on the LAN, and then Default (default) on the WAN.

    But at LANs you will see wide swings in bandwidth usage for sure.

    I was seeing games get a massive lag/latency spike, followed by packet loss that shows up in game as rubber banding.  I haven't gone back to using it, as what I am doing now seems to be working the best.

    If the case was a lack of bandwidth, then the Games queue needs more bandwidth assigned. HFSC is about assigning minimum bandwidths. At home, I give P2P(default) 20%, Normal traffic 40%, high priority 40%. Games don't need a lot of bandwidth, typically less than 64Kb/s per user unless you're on some high update rate FPS game. Some games you can play over 14.4k dialup as long as no VoIP.

    100 players all using 64Kb/s would be 6.4Mb/s.  Treat 80% utilization as full, so add 25% to 6.4Mb/s and you get 8Mb/s.

    Yes, most games are UDP-based.  I had the queues set at 20 ACK / 35 Games / 35 HTTP / 10 P2P.



  • @Nullity:

    @Harvy66:

    If the case was a lack of bandwidth, then the Games queue needs more bandwidth assigned. HFSC is about assigning minimum bandwidths. At home, I give P2P(default) 20%, Normal traffic 40%, high priority 40%. Games don't need a lot of bandwidth, typically less than 64Kb/s per user unless you're on some high update rate FPS game. Some games you can play over 14.4k dialup as long as no VoIP.

    100 players all using 64Kb/s would be 6.4Mb/s.  Treat 80% utilization as full, so add 25% to 6.4Mb/s and you get 8Mb/s.

    With HFSC you can also do the opposite and assign worst-case latency maximums by setting m1 to 0 and d to the maximum latency in the link-share (or real-time?) parameters.  This might be best used when you have many latency-sensitive queues and only a few latency-insensitive queues (where you'd employ the method).

    The advanced configurations of HFSC are a bit more than I feel confident trying to use without having some way to actually measure the differences caused by the settings.

    @sideout:

    @Harvy66:

    @sideout:

    Which would be the case at a large LAN party.  Typically I have 4 queues defined: P2P (default), ACK, Games, and HTTP on the LAN, and then Default (default) on the WAN.

    But at LANs you will see wide swings in bandwidth usage for sure.

    I was seeing games get a massive lag/latency spike, followed by packet loss that shows up in game as rubber banding.  I haven't gone back to using it, as what I am doing now seems to be working the best.

    If the case was a lack of bandwidth, then the Games queue needs more bandwidth assigned. HFSC is about assigning minimum bandwidths. At home, I give P2P(default) 20%, Normal traffic 40%, high priority 40%. Games don't need a lot of bandwidth, typically less than 64Kb/s per user unless you're on some high update rate FPS game. Some games you can play over 14.4k dialup as long as no VoIP.

    100 players all using 64Kb/s would be 6.4Mb/s.  Treat 80% utilization as full, so add 25% to 6.4Mb/s and you get 8Mb/s.

    Yes, most games are UDP-based.  I had the queues set at 20 ACK / 35 Games / 35 HTTP / 10 P2P.

    Is that in Mbits?  Does that also apply to the upload bandwidth?



  • No, those are percentages.  I set QInternet at 180Mb/s (3 modems x 60Mb/s per modem) for download, and set the WAN interface to 6Mb/s on each WAN.
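    Applying those percentages to the stated rates as a quick sanity check (queue names and splits as given earlier in the thread):

```python
down_mbps = 3 * 60   # QInternet: 3 modems at 60 Mb/s each = 180 Mb/s down
up_mbps = 6          # upload per WAN interface
splits = {"ACK": 20, "Games": 35, "HTTP": 35, "P2P": 10}  # percent
up_alloc = {queue: up_mbps * pct / 100 for queue, pct in splits.items()}
# e.g. Games gets 35% of 6 Mb/s = 2.1 Mb/s of upload per WAN
```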



  • Here are the configs for you guys.  There are several folders in the Google Drive, so find the one you want.  I included a zipped file and raw files as well.  The username and password for the VM are the defaults.  Here are some notes:

    1. I am using a 172.22.0.1/22 IP scheme.  If you don't like that, you will need to change it.
    2. I have 3 of the 4 WANs set up with modems on DHCP and 1 static, for use with a cache server or for streamers.
    3. The top rule under the LAN firewall rules is disabled.  Enable it when using a LAN cache server, as it blocks DNS for everyone except that group.
    4. I split the DHCP pools up into groups for a 250-person LAN party.  You can shorten them or leave them as you see fit.
    5. I used Google DNS and OpenDNS for DNS, and used Level3 for gateway monitoring.

    To use the configs:
    1. Download them and extract if you downloaded the zip.
    2. Import into VMware.  I did these on Workstation, but they should import fine into the free version of ESXi.
    3. You might have to change the NIC types.  Some of these are VMX and others are EM NICs; change them all to VMX if you want.  I usually do, I just didn't this time since this is a lab machine I am using to build the config.
    4. Remember: the username is admin and the password is pfsense!

    The raw XML files are there too, so you can just download those and restore from the GUI.  The same notes above apply, so back up your config before applying mine.

    Here is the link:  https://drive.google.com/drive/folders/0B96G4GloGCiKRklTaE83SU9nY0E?usp=sharing
    Attached are some pics of the firewall and DHCP modem group configs as well, plus there are XML files in the Google Drive too.










  • 35% of 6Mb/s is only 2.1Mb/s.  Your high latency and Codel trouble may be coming from not enough bandwidth, especially if some large subset of those games are UDP, since UDP does not back off in response to dropped packets.

    When I said this

    100 players all using 64Kb/s would be 6.4Mb/s.  Treat 80% utilization as full, so add 25% to 6.4Mb/s and you get 8Mb/s.

    That is for both up and down.  Game traffic tends to be very symmetrical.  I doubt all of your users are using the Internet at the exact same time, but you're up against your limit for upload.  Give your games more bandwidth on your WAN, probably nearly all of it.  You may even want a separate ACK queue for your games so anyone downloading doesn't saturate your already starved egress ACK queue with bulk-download ACKs.  Actually, don't assign P2P or HTTP traffic to the ACK queue for their ACKs at all.

    Downloading at 60Mb/s is going to generate about 2Mb/s of ACKs, leaving you with only 4Mb/s of spare bandwidth for other stuff, which is not even enough for your games.
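    One way to land in that ballpark, assuming roughly 1460-byte segments, delayed ACKs (one ACK per two segments), and 64-byte ACK frames; these assumptions are mine, not from the post:

```python
def ack_upload_mbps(down_mbps: float, mss_bytes: int = 1460,
                    acks_per_segment: float = 0.5, ack_bytes: int = 64) -> float:
    """Rough estimate of upstream ACK bandwidth generated by a download.
    Real numbers vary with the TCP stack, offloads, and ACK frequency."""
    segments_per_sec = down_mbps * 1_000_000 / 8 / mss_bytes
    acks_per_sec = segments_per_sec * acks_per_segment
    return acks_per_sec * ack_bytes * 8 / 1_000_000

ack_load = ack_upload_mbps(60)  # a 60 Mb/s download: roughly 1-2 Mb/s of ACKs
```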



  • I use different ratios on the WAN.  You have to remember as well that we are talking about 3 WANs, not one, so that is 18Mb/s of upload.

    Plus, you don't assign an ACK queue for UDP traffic anyway.  The majority of the TCP gaming traffic is lobby and matchmaking traffic, but I can look at trying a separate qGAMESACK for the gaming traffic.

    One reason I never did combined TCP/UDP rules, and always made separate rules for each protocol, is that pfSense seemed to do a better job of getting the traffic into the right buckets when I did not use combo rules.  That might have changed; I haven't really tested it yet.

    Either way, with HFSC a person running a torrent client can ruin the day, and with no real concrete way to block them in pfSense, for the LAN party use case it seems easier overall to apply a limiter globally and divide traffic up per modem, since that is the real bottleneck, rather than try to do something fancy to limit them.



  • @sideout:

    I use different ratios on the WAN.  You have to remember as well that we are talking about 3 WANs, not one, so that is 18Mb/s of upload.

    If you assume my recommendation of 64Kb/s per player just to game, that's 16Mb/s of bandwidth right there for 250 players.  You pretty much have zero headroom on your upload.  You'll have to assign about 90% of your bandwidth to the Games queue.



  • It was never an issue except when I turned Codel on for the queues.  If I left it off, it ran fine.  It was mainly the torrenting that caused me to change tactics.

    Since you can't really block them, the easiest and fastest fix is to limit them.