Replacing Cisco 1811
-
Hi guys, I'm preparing to replace my Cisco 1811 with a pfSense box.
Currently I have a Cisco 1811 on a university connection (100mbps full duplex on the wan side)
I've found that while running NAT, SPI, IPsec (AES-256), a rather large ACL, and some other goodies, CPU usage on the 1811 climbs very high from interrupt load once I pass about 60 mbps, resulting in unstable and inconsistent transfer speeds, high latency, and lost packets.
I would like to install pfSense on a Dell Optiplex 745 SFF machine; specs are as follows:
Intel C2D E6300 1.86GHz
1GB DDR2
Broadcom 5754 Gigabit Ethernet (onboard)
Intel PWLA8391GT Gigabit Ethernet (32-bit PCI 2.3)
What sort of performance can I expect with the pfSense install, given the same setup the Cisco 1811 is running?
Ideally I would like to be able to route everything at full wire speed (100mbps) without any latency or loss. Am I expecting too much?
Thanks!
-
You might have to just try it and see. If your traffic is predominantly small packets and they predominantly have to be matched against a very large number of rules then the per packet overhead might be high enough that the box struggles to meet your goals. On the other hand, if your traffic is predominantly large packets that generally have to be matched against a small number of rules then the box could well handle it with ease.
Trying it would also help you to check that pfSense has all the features you need.
-
I would like to see 500,000 pps, is this achievable?
-
http://www.pfsense.org/index.php?option=com_content&task=view&id=52&Itemid=49
That doesn't have pps rates, but in the book we just say for 500k pps it would need to be the fastest quad core processor at the time the book was written, which was last year. So I'm not sure a C2D would get the job done, but an i7 (maybe an i5) or Xeon equivalent might be.
-
500K PPS is awfully high for a 100Mbit FD connection. That works out to an average packet size of around 50 bytes (including the header). I'd think that a more reasonable worst case would be about 250 bytes/packet, and I tend to use 750 bytes when trying to determine the typical PPS (which is still probably low).
EDIT: I just looked it up, the 1811 is rated at 100Mbit/s through the Firewall w/ 1400 byte packets. If that's the case, your typical packet size is probably around 840 bytes.
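For what it's worth, the back-of-the-envelope arithmetic (rough assumed numbers, not measurements) looks like this:
# bytes per packet if a 100Mbit full-duplex link really carried 500k pps (both directions combined)
echo $(( 2 * 100000000 / 8 / 500000 ))    # = 50
# pps if the average packet is around 840 bytes as estimated above
echo $(( 2 * 100000000 / 8 / 840 ))       # = 29761, call it ~30k pps total
So even with the link saturated both ways you're looking at tens of thousands of pps, not hundreds of thousands.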
-
True, that would be overkill for just 100Mbit. I missed that bit before.
You could probably saturate 100Mbit both ways with an Atom. (Search the forum here for "lanner" - there is a thread with throughput tests for an Atom d510)
-
OK, I did a fresh install of pfSense 1.2.3 on a 500 mhz Celeron with 256mb ram, was able to download at a rate of 70 mbps @ 5.6 kpps in and 2.9 kpps out before the cpu was maxed out by interrupts. Essentially the same throughput as my Cisco 1811. We'll see in a few days with the C2D system, but so far it's looking good.
-
I installed pfSense on the C2D optiplex machine and the results so far are disappointing.
Currently I can upload at 98 mbps and 8 kpps, with the acks coming in at 2 mbps and 3 kpps. This uses 13% cpu and 15% ram. Fine so far.
But when I try to download at the same time, my upload drops off significantly:
Download at 97 mbps and 10 kpps, and upload at 61 mbps and 9 kpps. This uses 25% cpu and 15% ram. What's the problem?
What is limiting my upload?
When I stop the download, my upload shoots back up to 98 mbps.
Any ideas?
-
Are you trying this on pfSense 1.2.3 or 2.0? If you are on 1.2.3 you might have to disable TSO and LRO on those network cards. In 2.0 we do this by default.
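If you want to try it by hand on 1.2.3, something like the following from a shell should do it. This is only a sketch: it assumes the interfaces show up as em0 and bge0 (substitute whatever your box actually has), the keywords only take effect if your ifconfig and driver support them, and the settings don't survive a reboot unless you script them:
ifconfig em0 -tso -lro
ifconfig bge0 -tso -lro
# or turn TSO off globally via sysctl
sysctl net.inet.tcp.tso=0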
-
What is limiting my upload?
A number of different factors could impact this. For example, are the upload and download both file transfers? Are they both hitting the same system? Does the target system have plenty of available CPU? Are the upload and download hitting the same drive? If so, contention for the disk heads may be introducing seek delays, especially if reads are favoured over writes; writes can also have more overhead than reads if they have to allocate space.
-
To answer a few questions:
I am running 1.2.3
Both uploads and downloads are DC++ file transfers
The uploads are all occurring on one machine
The downloads are all occurring on a different machine
Behind the pfSense machine is a gigabit switch.
What are TSO and LRO?
-
TSO is TCP segmentation offloading, and LRO is large receive offloading. There are some driver bugs which, when those features are enabled on a card that supports them, can cause degraded performance that isn't otherwise explainable.
A better test would probably be to run a 2.0 beta snapshot on there; it would probably benefit from the newer underlying OS and drivers and such.
-
I upgraded to the latest 2.0 beta. I haven't checked the bandwidth issue yet, but I noticed my IPsec tunnel is now broken.
Here are the pfsense logs:
Oct 15 16:24:04 racoon: ERROR: fatal parse failure (1 errors)
Oct 15 16:24:04 racoon: ERROR: /var/etc/racoon.conf:46: ";" syntax error
Oct 15 16:24:04 racoon: INFO: Reading configuration from "/var/etc/racoon.conf"
Oct 15 16:24:04 racoon: INFO: @(#)This product linked OpenSSL 0.9.8n 24 Mar 2010 (http://www.openssl.org/)
Oct 15 16:24:04 racoon: INFO: @(#)ipsec-tools 0.7.3 (http://ipsec-tools.sourceforge.net)
-
That's a new one… you might start a separate thread for that though so this one doesn't get too far off track.
It would be good to know exactly what settings are used on the tunnel, and the exact contents of /var/etc/racoon.conf
-
OK, I started a new thread with the pfSense 2.0 IPSEC problems
I've done some more bandwidth testing and it appears the problem still exists after upgrading from 1.2.3 to 2.0 BETA.
The uploading and downloading are being done by separate machines behind the NAT, to and from multiple machines on the WAN side.
I can upload fine at ~100 mbps while only receiving the acks, and I can download fine at ~100 mbps while only receiving the acks, but when I attempt to do both at the same time my upload is throttled to ~60 mbps, while my download still reaches ~100 mbps. It appears pfSense is limited to about 10 kpps in any given direction.
-
I moved back to 1.2.3 and changed the network card
I am now running an Intel PWLA8492MT Dual Gigabit NIC (PCI) and have both my WAN and LAN connections plugged into it; I am no longer using the onboard Broadcom NIC.
I am still experiencing the same problem. Any ideas?
-
Since you are uploading to one system and downloading from another system, how about swapping the roles to see if the slower speed moves with the upload role or stays on the same system?
-
There are 12 machines on the wan side and 2 machines on the lan side. 6 are downloading from 1 machine on the lan side. The other 6 machines are uploading from the other machine on the lan side. I've tried changing the roles of the 2 lan machines and the 12 wan machines, but it doesn't make any difference.
Is it possible the bottleneck is the PCI bus?
Is anyone running pfSense at 100 mbps FD using PCI NICs?
-
Is it possible the bottleneck is the PCI bus?
The PCI bus clocks at 33MHz and carries 4 bytes per cycle. A transfer requires an address cycle then a data cycle, so transferring 4 bytes takes 2 cycles, giving a maximum rate of 16.5MHz * 4 bytes = 66Mbytes/sec, or about 528Mbits/sec.
Your transfer requires a bit more than 100Mbps x 4, because each transfer direction has to cross the bus twice (in through one NIC to RAM, then out through the other NIC), plus ACKs to keep the traffic flowing plus protocol overhead.
The bus won't be fully utilised because it's shared, and there will be gaps while devices pause to allow other devices to acquire the bus.
In addition to the basic mode I have described, some devices can operate in burst mode in which they use multiple data cycles per address cycle. I would expect Intel Gigabit NICs would use burst mode. Maybe you are limited by bus capacity. I don't know how you could check that without expensive equipment. More modern systems providing PCI Express buses have considerably higher i/o bandwidth than systems with a single PCI bus.
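To put rough numbers on it (same address-cycle-plus-data-cycle assumption as above, so a sketch rather than a measurement):
# theoretical ceiling: 33MHz bus, half the cycles carry data, 4 bytes per data cycle
echo $(( 33000000 / 2 * 4 * 8 ))    # = 528000000 bits/sec, about 528Mbit/s
# your traffic: 100Mbit in plus 100Mbit out, each crossing the shared bus twice (NIC -> RAM -> NIC)
echo $(( 2 * 2 * 100000000 ))       # = 400000000 bits/sec before protocol overhead
That doesn't leave much headroom once you allow for arbitration and sharing, so the PCI bus being the bottleneck is quite plausible.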
There are 12 machines on the wan side and 2 machines on the lan side. 6 are downloading from 1 machine on the lan side. The other 6 machines are uploading from the other machine on the lan side.
Did you mean "uploading to" rather than "uploading from"? I find the description as written suggesting all the transfers are in the one direction (the two systems on the LAN sourcing all the data).
-
Yeah, I misworded it; I'm attempting to bring 100mbps in and send 100mbps out.