8-Processor 7U Box
Hey guys. Once I get my server shipped to my new house, I'm probably going to run Linux as the main OS with pfSense inside VMware or Xen. The system will have 2 dual-port NICs (10/100), 1 gigabit NIC, and 1 rl0 NIC, as well as 4 Atheros NICs in AP mode. pfSense will run as the main router, alongside Win98 on a host-only network (I need to access an old legacy parallel drive) and a few other Linux installs (MySQL, httpd, Asterisk, cluster center node, etc.). If anyone wants to give some advice, now might be the time. Here are the specs:
1024MB of RAM (the box can hold 16GB, but I'm still trying to find that much in 1GB sticks so I can fill all 16 slots; it needs to be ECC SDRAM)
4x 4GB SCSI drives in RAID 5
2x USB 2.0 SATA300 750GB drives (software RAID, most likely)
1x remote management video card for remote access via DSL modem
2x 24-port 10Mbit hubs
2x fiber modules for the hubs
1x 21-port 3Com 10/100 switch
1x 21-port Linksys switch
This will probably be for a city-wide WiFi network with 4 APs centered off of this node.
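For the two 750GB drives, Linux software RAID 1 with mdadm is the usual route. A sketch, assuming the drives show up as /dev/sdb and /dev/sdc (the device and mount names here are examples; check yours with `fdisk -l` before running anything, since these commands are destructive):

```shell
# Sketch: mirror the two 750GB USB/SATA drives with Linux software RAID.
# /dev/sdb1 and /dev/sdc1 are assumed partition names -- verify them first.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Put a filesystem on the mirror and mount it
mkfs.ext3 /dev/md0
mkdir -p /mnt/storage
mount /dev/md0 /mnt/storage

# Record the array so it assembles on boot
mdadm --detail --scan >> /etc/mdadm.conf

# Watch the initial sync progress
cat /proc/mdstat
```

The initial resync of a 750GB mirror will take a while, but the array is usable while it runs.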
rsw686
This seems like an older server. What interface are those SCSI hard drives? I'm assuming they're slower than U160, as they're only 4GB each. The performance from them is going to be mediocre for what you want to do. I have three 9GB U2W drives, two of them in RAID 1, and can only push about 20MB/s max.
If you can, mount those SATA 300 drives internally with a PCI-X adapter; at the very least, go external SATA with a PCI-X adapter. Load everything on them and ditch the SCSI drives. With VMware you need fast drives to handle all those OSes you want to run.
Plus, you'll need at least 2GB of memory. I have a Pentium D 3.2GHz with 2GB of RAM and two 160GB SATA 300 drives. I run the pfSense developers version on it with 192MB of memory and Fedora 5 with 384MB of memory, and memory usage in VMware shows 1GB. Add another guest and you'll be swapping to the hard drive. Also, the SATA 300 drives I have in RAID 1 show a burst rate of 220MB/s and a sustained rate of 80MB/s. That's 4 times the performance of the SCSI drives, plus you'll have reliability.
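If you want to compare the SCSI array against the SATA drives yourself, a quick-and-dirty sequential throughput check can be done with dd (the temp file location is just an example; point it at the filesystem you want to measure):

```shell
# Rough sequential write/read check; dd reports throughput on stderr.
testfile=$(mktemp)

# Write 64MB and let dd report the rate (fsync so the write actually hits disk)
dd if=/dev/zero of="$testfile" bs=1M count=64 conv=fsync 2>&1 | tail -n 1

# Read it back (note: a freshly written file may be read from cache,
# which will look much faster than the disk really is)
dd if="$testfile" of=/dev/null bs=1M 2>&1 | tail -n 1

rm -f "$testfile"
```

For a more honest read number, benchmark a file larger than RAM or use a tool like `hdparm -t` on the raw device.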
The other thing I'll recommend, since you have 8 processors: when installing the VMware guests, set them to use 2 processors each. This will give you better performance; otherwise each guest will only have access to a single 500MHz processor.
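Giving a guest two virtual CPUs is a one-line setting in the guest's .vmx file (it can also be set through the VM settings dialog). A fragment, assuming your VMware version supports virtual SMP:

```
# In the guest's .vmx configuration file, with the guest powered off:
numvcpus = "2"
```

Note that virtual SMP only helps if the guest OS and its workload can actually use the second CPU; a single-threaded guest gains nothing.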
Seems like you'll have lots of clients running off this server. What's with the 10Mb hubs? Do they have a 100Mb uplink port? If not, they're going to be a major bottleneck. Plus, they're hubs and not switches, so traffic on them will be visible to everyone. That's not very good security-wise since this is city-wide, and hubs don't have the bandwidth capacity that a switch has.
Thanks for the info. Looks like I'll be upgrading. The 10Mbit hubs mostly act as transceivers or run a long-distance link which then goes off to switches. But yeah, it's an older server with no internal mount points, so I'm thinking maybe a PCI card to run the SATA controller, and then a bunch of RAM. The infrastructure in this area is non-existent, to give it the most credit :P My system will probably be bigger than the college's here. I'm mostly trying to get a WISP running so students, staff, and the local population have access to some higher-tech stuff. All the hardware listed is in my possession other than the 750GB drives. Price: free. So I'm just hoping this will run, even if it's pegged at 100% use all the time :P
rsw686
Nice to help out and support the community. If the hardware's free, then use it and see how it works out. You can always swap out a hub for a switch with hardly any downtime. I've just been used to having 10/100 switches and have never really laid hands on 10Mb gear. I know I can easily max out a 100Mb connection with data, and cable internet around here comes in at speeds up to 15Mb/s down. That might not be the case in your area.
The only way you're getting 15 here is uncapping and/or buying a few T1s :P I know personally I could max out a T3, but hell, I don't think AT&T has large trunks in this area. I'll probably be the biggest cost-effective ISP; no one here wants to pay $40+ for a 6Mbit line :P