Need help with setup for 1Gb / 500u LAN-party



  • Hi!

    I'm hosting a LAN party with about 500-600 clients. We have the edge switches and a higher-end core switch ready, and we run a flat network. For internet connectivity we have a 1 Gb link that we've previously routed through a Check Point server, but it lacks UPnP support, so console owners have a hard time hosting and connecting. We'd like to try the jump to pfSense this year.

    Can a single box run dhcpd and do the routing for that many clients? We're not sure about the IP range yet, but I assume we can get around 100-200 public IPv4 addresses. Is there a way to distribute the NAT over many WAN IPs? And what about IPv6: can we NAT from local IPv4 to an IPv6 WAN while still leaving the local network flat?

    Also, what about state tables: for a maximum of 600 users, will 16 GB of RAM do? And will UPnP still work with that many users? There are 60k+ ports available per IP, but is that enough?



  • why'd you want 200 public IPs?

    600 users can run on far less than 16 GB of RAM.



  • We can get about 200 IPs, not enough for each participant to get their own public IP. So how many do we need? Surely 600 users through just one public IP will be a problem?

    OK, I've read about users filling state tables with far fewer users than that, so how much RAM do I need? Also, since this is a LAN party, the network usage is heavy, with lots of (probably) P2P traffic, and pretty far from an office environment.


  • LAYER 8 Global Moderator

    I don't see why you couldn't just create manual outbound NAT rules to distribute your LAN space across your public IP space.

    If you can do IPv6, do that; then every client can have their own public IP and you have no need to NAT anything.
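
    To make the manual outbound NAT idea concrete, here's a small Python sketch of the bookkeeping involved (all addresses below are hypothetical examples, not from this thread): carve a flat LAN into /24 chunks and assign each chunk a public WAN IP round-robin, which is essentially what a set of manual outbound NAT rules in pfSense would express.

    ```python
    # Hypothetical sketch: split a flat /22 LAN across a small pool of public
    # WAN IPs, mirroring what manual outbound NAT rules would do in pfSense.
    # 10.0.0.0/22 and 203.0.113.x are illustrative placeholder addresses.
    import ipaddress

    lan = ipaddress.ip_network("10.0.0.0/22")  # ~1000 addresses, flat LAN
    wan_ips = [ipaddress.ip_address("203.0.113.10") + i for i in range(8)]

    # One outbound-NAT mapping per /24 chunk, round-robin over the WAN pool.
    mappings = {
        str(subnet): str(wan_ips[i % len(wan_ips)])
        for i, subnet in enumerate(lan.subnets(new_prefix=24))
    }

    for src, nat_ip in mappings.items():
        print(f"source {src} -> NAT to {nat_ip}")
    ```

    With a bigger LAN block or smaller chunks (e.g. /26), the same round-robin spreads clients more evenly, which also helps with the per-IP game-hosting limits mentioned later in the thread.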



  • sideout has hosted several LAN parties with pfSense and has a good traffic-shaping guide:

    https://forum.pfsense.org/index.php?topic=99503.0

    I'd reach out to him and chat about what he's learned using pfSense to host LAN parties.



  • @heper:

    why'd you want 200 public IPs?

    600 users can run on far less than 16 GB of RAM.

    Some games will not allow you to host more than a certain number of servers on a single IP.

    E.g. Battle.net (Warcraft 3) has a limit of 6 game hosts per IP last I tried.

    If this is a big event, or a publicized event with hosted streaming servers, I'd be far more concerned about DDoS attacks on the main line(s) than about whether pfSense can hold up to the load from the clients. A decent quad-core Core i CPU with 8 GB of RAM will probably be more than sufficient for the load alone. Repelling DDoS is another issue on its own.



  • @johnpoz:

    I don't see why you couldn't just create manual outbound NAT rules to distribute your LAN space across your public IP space.

    If you can do IPv6, do that; then every client can have their own public IP and you have no need to NAT anything.

    pfSense has a feature to add an address pool on the WAN side, with various methods for distributing LAN traffic across it (via NAT); round robin sounds like a good option. But how that works (or doesn't) with IPv6 I don't know.

    IPv6 is fine for computer clients, but some consoles, like the Xbox 360, don't do IPv6. Currently 17 people are joining with 360s; other older consoles include the Wii (7 people) and the PS3 (29 people). I haven't checked those for IPv6 support.

    @dreamslacker:

    Some games will not allow you to host more than a certain number of servers on a single IP.

    E.g. Battle.net (Warcraft 3) has a limit of 6 game hosts per IP last I tried.

    That's old-school Battle.net; the new one has no IP limitations (according to a Battle.net forum).

