Good throughput?
-
Hi
I've got pfSense running on a 3.2 GHz Pentium 4 with 1 GB of DDR2 533 MHz RAM.
3 x 1 Gbit NICs. (Don't know the maker right now because I'm not at home.) I've done a bit of testing and I get about 79 Mb/s throughput through the pfSense box. Without it I get about 94 Mb/s.
Is this a normal number for a 100 Mb/s network? The CPU hardly ever goes over 10% and the RAM usage is around 11-15%.
-
79 seems a bit low. 94 is what I would expect. How are you testing throughput? Do you need 94 mbps through your firewall?
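For reference, ~94 Mb/s is roughly the theoretical ceiling for TCP over 100 Mbit Ethernet once framing overhead is accounted for. A quick back-of-the-envelope, assuming a standard 1500-byte MTU:

```shell
# Max TCP goodput on 100BASE-TX, assuming a standard 1500-byte MTU:
#   payload per frame = 1500 - 20 (IP hdr) - 20 (TCP hdr)      = 1460 bytes
#   bytes on the wire = 1500 + 14 (eth hdr) + 4 (FCS)
#                       + 8 (preamble) + 12 (interframe gap)   = 1538 bytes
awk 'BEGIN { printf "%.1f Mb/s\n", 100 * 1460 / 1538 }'   # 94.9 Mb/s
```

So a clean 94 Mb/s measurement means the link itself is already running flat out.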
db
-
I was using iperf, one computer on the WAN side and one on the LAN side. I tried the 10 second default a few times, then a few 30 seconds. All came back 79.xx Mb/s.
Yeah, I'm going to get 100/100 Fiber line soon. And I want to use it all of course.
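For anyone wanting to reproduce this kind of test, the classic iperf invocation looks roughly like this (the address below is a placeholder for the LAN-side host):

```shell
# On the host behind pfSense (LAN side), start an iperf server:
iperf -s

# On the host on the WAN side, run a 30-second test against it
# (192.168.1.10 is a placeholder for the LAN host's address):
iperf -c 192.168.1.10 -t 30
```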
-
How is your latency? I found enabling device polling increased latency through the box from <1ms to 6-9ms. Not sure that would drop your throughput like that, but worth trying.
You might also try changing NICs if that's an option, or at the very least, if your 3 nics are different you could try reassigning them to see if any pair in combination gives you better performance.
You could also play with flow control. Sometimes disabling it will improve throughput on a network (but if not, then I would leave it enabled).
Also, have you checked your logs? Sometimes interrupt moderation will show up there, in which case device polling might help.
Check your traffic shaper too. A parent queue set too small will choke your link. Don't trust the parent queue size to be super-accurate. If you have it set for 100 mbps, experiment with setting it to 200. If that improves throughput then bring it down slowly to the point where the queue is backlogged just slightly. You want your default WAN queue to be just slightly smaller than your link speed and you'll have to experiment to find that spot.
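If it helps, on FreeBSD-based systems these knobs are usually reachable from the shell. This is only a sketch; interface names (re0 here) and option availability vary by driver and pfSense version:

```shell
# Device polling (availability depends on kernel/driver support):
ifconfig re0 polling        # enable polling on interface re0
ifconfig re0 -polling       # disable it again

# Flow control is typically negotiated as a media option;
# list what the NIC supports and currently advertises:
ifconfig -m re0 | grep -i flowcontrol
```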
db
-
I don't have device polling enabled. I haven't touched the flow control or the traffic shaper; they're at their defaults.
And I've checked with the top command and interrupts aren't a problem at all.
PS: I've tried swapping the NICs around and the only difference is that I get 77.xx Mb/s instead.
The NICs are 2 x Realtek (8169 I believe) and 1 x D-Link GA 311
-
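(For anyone following along, interrupt load on a FreeBSD-based box is commonly inspected like this; output layout varies by version:)

```shell
# Per-device interrupt counters and rates:
vmstat -i

# Live view including system/interrupt CPU time:
top -S
```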
I refer you to this entertaining post:
http://osdir.com/ml/linux.redhat.fedora.testers/2003-11/msg01199.html
Basically, Realtek NICs are known for their poor performance, which usually translates to higher CPU usage and sometimes higher latency. CPU usage is not a factor in your case, and I have my doubts about latency.
I don't know about that specific D-Link model, but many D-Link NICs are Realtek inside.
Poor hardware is usually poor in all respects, and if you have the chance to replace those NICs with something of higher quality (Intel, 3Com…), I can only speculate that you may see improvements.
db
-
Don't expect much more than 75 Mb/s from a Realtek card.
These cards really are just crap. Usually Intel cards perform a lot better (close to the theoretical maximum).
Also, if you are testing close to the maximum of the hardware (router side), make sure you also have appropriate hardware to test with (user side).
A possible setup to test Gbit is:

1Gbit User   1Gbit User
      \         /
       \       /
      Gbit-Switch
           ¦
           ¦
        pfSense
           ¦
           ¦
      Gbit-Switch
       /       \
      /         \
1Gbit User   1Gbit User

Like this you can make sure that the hardware of the users isn't a possible bottleneck.
Or you take a test device which is designed to create 1 Gbit of load. (Although they can be quite expensive.)
-
This seems very low even for Realtek to me. In my experience the PCI-based 100 Mbit 8139 cards (rl driver) can do about 90 Mb/s; they generate a lot of interrupt traffic and CPU usage, but as long as you prop them up with plenty of CPU they can get decent throughput. I've seen much lesser hardware do better than this with them, though they get tragically bad on old P2-class hardware. On top of that, your cards are apparently the newer (and much better) design (re driver), and are GigE. While they're still not very good cards, I find it hard to believe that they can't even do 100 Mbit; I had one of these in a single-core Atom board carrying 4 VLANs that was able to pass about 20 MB/s (300+ Mbit total) without much difficulty, and that box is weaker in all respects than this one.
I don't really have any suggestions, and as usual, I'll always recommend not to use these NICs if you can avoid it, but I suspect the problem is elsewhere.
-
I missed the fact that these are gbit cards. That really is unexpected then.
We need to narrow down the source of the bottleneck, remove as many factors as possible.
Are your test computers plugged directly to pfsense via a single cable, or is there a switch between? Remove the switches, if any, and connect direct. If your nics aren't auto mdi-x then they won't light up when connected and you'll have to use a crossover cable.
Are your test computers up to the task, i.e., gbit cards and ample CPU? Check cpu usage on both test computers during the test. Use a monitoring program that identifies iowait (some don't). For example, you should see >10% idle during the test on both iperf machines when looking at top.
If this test produces similar low throughput and low cpu usage you might consider installing the iperf package on pfsense and testing LAN and WAN throughput separately.
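Testing each leg separately might look like this (a sketch, assuming the iperf package is installed on pfSense; addresses are placeholders):

```shell
# On pfSense (with the iperf package installed), listen:
iperf -s

# From the LAN-side host, test just the LAN leg
# (192.168.1.1 is a placeholder for pfSense's LAN address):
iperf -c 192.168.1.1 -t 30

# Then repeat from the WAN-side host against pfSense's WAN address
# to see whether one leg is much slower than the other.
```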
db
-
The thing is that these are Gbit cards, but I'm only running them on a 100 Mbit/s network.
With the test computers I got 94 Mb/s without pfSense and around 79 Mb/s with pfSense.
-
Try the test without the switches, i.e., just use crossover cables:
test machine
¦
¦
pfSense
¦
¦
test machine