Some questions
-
If it's possible to get managed gigabit switches, you could also use LACP to get multiplied speeds between switches.
-
By that, do you mean having several gigabit connections through one cable to the switch(es)?
-
If you do not have different subnets, then pfSense only handles internet traffic. This means you need a pfSense machine which can handle your 300-400 Mbit/s internet speed, nothing more.
The rest depends on the switches. What Metu69salemi means with LACP is that you connect the table switch and the backbone switch with two (or more) gigabit cables and aggregate them. But this has to be supported by the switches.
Try searching Wikipedia for LACP.
-
Aha, I see. Multiple cables between the switches and the backbone.
That would be hard, because none of the table switches are managed, and neither is the backbone, but I think I can get hold of one if needed.
Anyway, would it be better for the LAN if I divided the network into two parts?
Like, one backbone for the left side and one for the right side, with both connected to the main backbone?
-
LACP can only be used if BOTH switches speak this protocol.
So if you only have unmanaged switches and just one managed switch, this makes no sense. If you divide it into two parts, left and right, it will only increase speed if the source and destination of the packets are on the same side. If they need to cross both backbones, it will be slower, because the frames have to pass through both backbone switches, which increases delay.
Best would be to connect each table switch with one separate cable to the one and only backbone switch, and connect the game servers to the backbone switch, too. This is the best you can do with your hardware.
-
It is probably better to stick to a single subnet for the LAN side. Segregating them would mean that the pfSense box will need to route between the subnets when clients from separate subnets try to communicate with each other, raising the load across the pfSense unit.
Not to mention, certain traffic like broadcast traffic (when games try to find hosts/servers) wouldn't cross the subnets for LAN gameplay.

As to physical segregation, a single large switch as the main would be better than trying to split down to multiple tiers if you only have a single gigabit link upstream from each table switch (since unmanaged switches won't have trunking capabilities).
For a network on that scale, you'll probably need a single 48 port gigabit switch as the main trunk and 16/24 port gigabit switches at the tables.
-
How is it possible to have a single subnet for 320+ computers?
You're suggesting that everyone attending the LAN party would be given IPs within one subnet, and the rest of the computers in another subnet (like game servers? the pfSense box? file-sharing servers?)?
Okay, so it wouldn't help to split the network.
And do you think a 48 port switch is better because of the number of ports or because of the capacity of the switch?
Again, thanks for the help!
-
@Alf:
How is it possible to have a single subnet for 320+ computers?
You're suggesting that everyone attending the LAN party would be given IPs within one subnet, and the rest of the computers in another subnet (like game servers? the pfSense box? file-sharing servers?)?
Okay, so it wouldn't help to split the network.
And do you think a 48 port switch is better because of the number of ports or because of the capacity of the switch?
Again, thanks for the help!
Use a Class A subnet (so to speak). That allows way more than 254 IPs in a subnet, so everything can be placed in the same one. e.g. the 10.0.0.0/8 subnet would have all IPs from 10.0.0.1 - 10.255.255.254 on the same subnet.
You don't need to use this, but I find it easy for large LAN gaming deployments because of the ease of punching in values: mostly 1s and 0s, and the subnet mask is simply 255.0.0.0. Further segregation by table or block (for humans) can be obtained by varying the 3rd octet, i.e. 10.0.1.X for table 1, 10.0.2.X for table 2 and so on.

A 48 port is probably what would fit your needs. Assuming you'd have 24 ports at each table, of which 2 - 4 are reserved for uplink/spare, that leaves you with 20 - 22 ports.
With 300 - 400 computers, we're looking at up to 20 of these switches. That means if you have a single 24 port for the trunk, you only have 4 ports left for the servers, router, etc. Hardly enough.

OTOH, a 48 port gives much more switching capacity, and critical resources like servers should never be starved of switching bandwidth, as would be the case if you had all of them on a separate switch connected to the trunk via a single gigabit link.
Furthermore, if you were to use 16 port switches at the tables, you can comfortably have 30 of these connected to the 48 port trunk. This gives more available upstream bandwidth per PC, since you now have 12 - 14 PCs per gigabit uplink compared to 20 - 22 PCs per gigabit uplink. Of course, the price per port goes up with a 16 port over a 24 port, and it multiplies even more, so you need to work this out.
If you can ensure that all your clients aren't going to be sharing files over the network, then 10/100 switches with gigabit uplinks like the SRW224G4 would suffice at the tables for general gaming, LAN or internet.
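The /8 addressing scheme described above can be sketched with Python's `ipaddress` module; the table and seat numbers here are just illustrative assumptions:

```python
import ipaddress

# The single flat subnet suggested above: 10.0.0.0/8 with mask 255.0.0.0
lan = ipaddress.ip_network("10.0.0.0/8")
print(lan.num_addresses - 2)  # 16777214 usable hosts, far more than 320+

def table_ip(table, seat):
    """Illustrative scheme: 10.0.<table>.<seat>, varying the 3rd octet per table."""
    return ipaddress.ip_address(f"10.0.{table}.{seat}")

print(table_ip(1, 5))          # 10.0.1.5
print(table_ip(2, 5) in lan)   # True -- still the same single subnet
```

The point the snippet makes is that table 1 and table 2 get human-readable blocks, yet every address remains inside one subnet, so no routing through pfSense is needed between tables.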
-
Amen
-
Okay, thanks for the good responses, it helps a lot.
The only thing I'm wondering about now is HOW I can give table 1 10.0.1.X and table 2 10.0.2.X as dreamslacker said.
And lastly, would a switch like this: http://h10010.www1.hp.com/wwpc/ca/en/sm/WF06a/12883-12883-4172267-4172280-4172280-4220258.html (Model J9561A)
be enough to serve as the backbone for the 300+ computers?
-
I wonder how you deal with load-balancing of traffic over multiple WAN links at such temporary "event"-type installations (LAN party, convention etc), particularly if you can't be sure about the applications used (check discussion at http://forum.pfsense.org/index.php/topic,1294.msg7690.htm & http://forum.pfsense.org/index.php/topic,41103.0.html)
Using BGP would be an option, but it's quite a bit of work for just a temporary setup lasting 1-3 days …
-
@Alf:
Okay, thanks for the good responses, it helps a lot.
The only thing I'm wondering about now is HOW I can give table 1 10.0.1.X and table 2 10.0.2.X as dreamslacker said.
And lastly, would a switch like this: http://h10010.www1.hp.com/wwpc/ca/en/sm/WF06a/12883-12883-4172267-4172280-4172280-4220258.html (Model J9561A)
be enough to serve as the backbone for the 300+ computers?
The switch will work fine (as will most unmanaged switches from other companies).
You assign the IP addresses by table to the individuals at the party. It's best to simply print or write out cards at each seat stating the IP address the client should use. This allows you to track users and narrow problems down more rapidly.
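Generating those per-seat cards is trivial to script. A minimal sketch, assuming the 10.0.<table>.<seat> scheme from earlier in the thread; the netmask and gateway values here are assumptions for illustration, not taken from the thread:

```python
NETMASK = "255.0.0.0"
GATEWAY = "10.0.0.1"  # assumed pfSense LAN address -- use your own

def cards(num_tables, seats_per_table):
    """Yield one printable card line per seat with its static IP details."""
    for table in range(1, num_tables + 1):
        for seat in range(1, seats_per_table + 1):
            ip = f"10.0.{table}.{seat}"
            yield (f"Table {table}, Seat {seat}: IP {ip}, "
                   f"Mask {NETMASK}, Gateway {GATEWAY}")

# Print cards for a small two-table example
for card in cards(num_tables=2, seats_per_table=3):
    print(card)
```

Attendees then enter the values from their card as a static configuration; no DHCP reservations are needed.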
-
I have a completely different question now.
We have plugged everything in and everything is running smoothly. The internet connection is constantly maxed at 100 Mbit/s.
The only problem is that we can't connect to any games using Battle.net.
Do you have any idea what it can be? We have all ports open, nothing is blocked and no traffic shaper. When we try to connect to WoW we just get instantly disconnected. WTF?
Edit: there is like one person constantly playing WoW, everyone else can't connect
-
The weird thing is that 20% can log on to WoW and the rest can't even log into any service using Battle.net.
-
Hi,
I am not playing WoW, but it seems that you have to do port forwarding:
http://portforward.com/cportsnotes/battlenet/wow.htm
But remember that you should port forward for the complete subnet, not only one client, so that everyone can play WoW.
-
@Alf:
I have a completely different question now.
We have plugged everything in and everything is running smoothly. The internet connection is constantly maxed at 100 Mbit/s.
The only problem is that we can't connect to any games using Battle.net.
Do you have any idea what it can be? We have all ports open, nothing is blocked and no traffic shaper. When we try to connect to WoW we just get instantly disconnected. WTF?
Edit: there is like one person constantly playing WoW, everyone else can't connect
You need to forward ports per client for Battle.net, i.e. 6112 on WAN to client 1, 6113 to client 2, 6114 to client 3, etc.
This is why it is important to assign static LAN IPs to each client. You then notify each user of the port they need to set up the Battle.net client for (default 6112).

IIRC, there is a limitation of 6 hosts per (WAN) IP for Battle.net (at least for WC3/DotA this holds true). You must set up the NAT so that each group of clients goes out a certain WAN IP for Battle.net. This is best done with Manual Advanced Outbound NAT (AON).
i.e. You allocate 6112 to 6127 per table (with each client using one of the ports).
Then under AON, you set the NAT so that ports 6112:6127 for 10.0.1.X (table 1) are mapped to one of the WAN gateways, 6112:6127 for 10.0.2.X (table 2) to another WAN gateway, and so on. Make these static ports.

This does not negate the 6 hosts per IP limitation, but you shouldn't have that many hosts on Battle.net to begin with. For regular WoW access, this should not be an issue so long as all the clients have unique ports.