White-box hardware to push 1Gbps?
-
Hello Folks,
I'm looking for a white-box vendor who would sell a reasonably priced 1U barebones server with Intel NICs onboard. I'd prefer something appliance-sized (19" depth) in a barebones so I can add my own Pentium D/Core 2 Duo/Core 2 Quad, RAM, and drives.
I'm not having much luck at Newegg, so I thought I'd ask here.
-
You should be able to get the Supermicro MBD-PDSBM-LN2+-O at Newegg. They have a matching 1U chassis (CSE-503-200B) for it too.
-
Hey, that's a great idea, thanks. It looks like the MBD-X7SBL-LN1-O will fit also and will support the E5200 and 4GB of DDR2 that I already have. They even have a combo deal for $229 for that MB and a slightly different 14" Supermicro server case. Sweet!
Do you think a Pentium Dual-Core E5200 (2 x 2.5GHz) and the Intel 82573V can push 1Gbps if I use the SMP kernel? Would I be better off dropping in a quad-core at 2.5GHz or faster?
-
A quad-core wouldn't be much help unless you run other packages to utilize the extra cores. A faster dual-core would be better, but the E5200 should easily push gigabit speeds on its own if you don't have much else running in the background.
BTW, you would need a flexible riser for your network card (the X7SBL only comes with 1 NIC; the PDSBM-LN2 comes with 2 Intel NICs).
-
Thanks dreamslacker, I wound up ordering the 2 x NIC version and the case you suggested. I'll use my E5200 + 4GB DDR2-800. I've got a 320GB 2.5" SATA drive and an 80GB drive I'll use as a backup. I may just mirror them because I can't ever imagine using more than 80GB of total space. I ordered a 3.5" to dual 2.5" cage in case I decide to go that route.
-
Do you actually plan to host something that can pull 1Gbps?
I would not worry so much about the speed, but more about how it handles a lot of packets.
-
Eventually, yes. I'm starting a VPN service (https://www.trafficcloak.com/) and pfSense will be the firewall for my network. So I would assume throughput is the most important factor?
-
Yes, but the hardware you are buying will not be able to handle that kind of traffic if it's VPN. Have you read about the pfSense limitations for VPN?
-
I have a Dell PowerEdge 1850 (2 x 2.8GHz OLD Xeons) running Windows Server 2008 for that. I've pushed 20Mbps of PPTP and SSTP traffic through it so far in my testing and the CPU hasn't blinked. If I start to have issues encrypting VPN traffic, I can just throw another server in and enable Network Load Balancing to balance the load between the two, or three, or four, etc.
I will eventually add another pfSense box running VRRP, but for now I want to make sure whatever I have in place can push 1Gbps so my clients aren't throughput-limited whatsoever. I want to start small but have the ability to grow as I need, while offering a product worth what people are paying. :)
-
Another quick question. Someone is telling me that bridging is less efficient than routing in pfsense. Here are his exact words:
"While you’re partly right, it also has to do with the bridging code in the underlying BSD OS and how the cards need to be in promiscuous mode to bridge. In addition, the version of the pf (packet filter) in pfSense (and FreeBSD) is missing numerous performance improvements, some related to bridging, that have been made to the upstream pf (in OpenBSD).
In general, bridging tends to be more resource intensive—if your goal is a firewall, routing is almost always the better choice (unless you have no choice)."
That doesn't sound correct to me. Is it?
-
There are just light-years between 20Mbps and 1Gbps...
Let me be honest: it would come as a big surprise if you reach 200Mbps of VPN traffic.
I have an ISA server handling my VPN, and if I push it, it will handle 130Mbps, but only in peaks. Sustained traffic is around 100Mbps.
And the tunnel is not encrypted.
-
http://www.sonicwall.com/us/products/NSA_4500.html
This handles 1Gbps VPN throughput, but it's only measured at 1418-byte packets. Normally you would encounter an average packet size of around 700 bytes, and then the throughput drops to 500Mbps. Also make sure that if you load balance, the core switches can handle that kind of traffic.
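For what it's worth, the math behind that drop is simple packets-per-second arithmetic. Here's a rough sketch; the 1418-byte and 700-byte figures come from the rating discussed above, and the assumption that the appliance is pps-limited rather than bit-limited is mine:

```python
# Why a VPN throughput rating drops with smaller packets, assuming
# the device is limited by packets per second (pps), not raw bits.

RATED_BPS = 1_000_000_000  # 1 Gbps rating
RATED_PKT = 1418           # bytes, packet size used for the rating

# The rating implies a pps ceiling:
pps_ceiling = RATED_BPS / (RATED_PKT * 8)

# At a more typical ~700-byte average packet, the same pps ceiling
# yields only about half the rated throughput:
TYPICAL_PKT = 700
effective_bps = pps_ceiling * TYPICAL_PKT * 8

print(f"pps ceiling:        {pps_ceiling:,.0f} pps")
print(f"effective at 700 B: {effective_bps / 1e6:,.0f} Mbps")
```

That works out to roughly 88k pps and ~494Mbps, which lines up with the ~500Mbps figure above.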
-
Perhaps I phrased my post incorrectly. I do not expect to reach 1Gbps of VPN traffic on a single box. When I hit the limit of that single server, I will simply add another server and load balance the two. When I reach the limit on two, I'll add a third. I know the box can push 1Gbps easily, as I've pushed 1Gbps through the server (across the internet) with RRAS/NAT already, but I do understand that encryption adds significant processing overhead.
What I am trying to avoid is placing myself in a situation where I need to start sizing and replacing firewalls because they can't bridge and firewall 1Gbps of traffic. I'd rather get that taken care of now, as it's the unknown in my equation. I have worked with RRAS since Windows 2000, so I'm very comfortable with what I'll be able to push through it and how to upgrade it with no downtime. CPU usage due to VPN encryption scales rather linearly, at least with RRAS, so my 20Mbps baseline gives me a rough idea of how much I'll be able to push through the box.
Here's a great read from Microsoft on RRAS performance: http://blogs.technet.com/rrasblog/archive/2009/02/09/rras-performance-results.aspx
In short, on an 8-core 2.1GHz Opteron machine, pushing 650Mbps from a single VPN client only utilized 40% of the available processor time. Accounting for the older technology of my 1850, your 200Mbps number is likely pretty close to accurate. The more important numbers are the sustained throughput with a 1000 VPN client load however. As you can see, 1000 clients pushing 100Mbps uses 13% (PPTP) or 33% (SSTP) of the available processor time. While those are numbers from a lab test under ideal circumstances, it provides a rough idea of how many clients I will be able to support before I need to start adding additional CPU power.
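For anyone who wants to sanity-check that extrapolation, here's the back-of-the-envelope version. The measured numbers are the ones quoted from the Microsoft post above; the assumption of perfectly linear CPU scaling is mine, so treat the results as optimistic upper bounds:

```python
# Linear extrapolation of the RRAS lab numbers quoted above.
# Real scaling is rarely this clean, so these are ceilings, not targets.

def max_throughput_mbps(measured_mbps, cpu_fraction):
    """Project throughput at 100% CPU, assuming linear CPU scaling."""
    return measured_mbps / cpu_fraction

# Single-client test: 650 Mbps at 40% CPU on the 8-core Opteron box
print(max_throughput_mbps(650, 0.40))  # ~1625 Mbps theoretical ceiling

# 1000-client tests: 100 Mbps aggregate at 13% (PPTP) and 33% (SSTP)
print(max_throughput_mbps(100, 0.13))  # ~769 Mbps ceiling (PPTP)
print(max_throughput_mbps(100, 0.33))  # ~303 Mbps ceiling (SSTP)
```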
I am puzzled by something you said, however: how do you have a VPN tunnel that is not encrypted?
-
Try to set up a test scenario. Take a 1GB file and transfer it via VPN over the pfSense box.
It is not the servers behind the firewall that are the problem. I would load balance the ISA as well if I encountered congestion. It's your pfSense box that would be the bottleneck.
-
I received the parts and built the machine today. I can push 950Mbps to 980Mbps via iperf from client to client through pfSense in bridge mode at 25% CPU usage (50% at 1Gbps bidirectional) using the following specs:
Intel Pentium Dual Core E5200 (2.5GHz)
4GB DDR2-800 (2GB works just as well)
Supermicro X7SBL-LN2
Intel 82573V & 82573L PCI-E NICs
I'd like to throw a boatload of packets at it, but iperf doesn't seem to be designed for that, as the most I can get it to pass is about 65k pps. Does anyone have any ideas for how to pound it with packets, somewhere in the million pps range?
-
Reduce the size of the packets using the -l argument. 65k pps for a gigabit link would suggest that your packets are close to 2kBytes.
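If it helps, the arithmetic works in both directions: pick a target pps and it tells you roughly what size to pass to -l. A quick sketch; note this ignores Ethernet framing overhead (preamble, inter-frame gap, headers), so real achievable pps on the wire will be somewhat lower:

```python
# pps at a given link speed is just bits/sec divided by packet size in bits.

def pps(link_bps, packet_bytes):
    """Packets per second for a given link rate and packet size."""
    return link_bps / (packet_bytes * 8)

GIG = 1_000_000_000
print(pps(GIG, 2000))  # 62,500 pps -- close to the ~65k observed
print(pps(GIG, 125))   # 1,000,000 pps -- the rough -l size for ~1M pps
```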
-
I'll give that a shot, thanks.