Performance issue on alix2d3/1.2.3 release
-
hello,
after solving the dhcp vendor identifier, i spotted what may be a performance issue.
I currently have fiber internet access, which provides about 80Mbps/40Mbps of bandwidth, much more than my previous connection. It uses DHCP with option 60 and is connected to a Sagem fiber-to-ethernet media converter. With the ISP box I get the full bandwidth between the LAN/WAN interfaces, but with pfSense it's a different story:
22/13Mbps when using interrupt mode
80/6Mbps when using polling mode

So I went and tested with a computer directly on the WAN interface, and I get almost the same result. Moreover, I get a fairly high packet loss rate (0.2-0.5%) in both cases… so I'm beginning to think the VIA vr(4) driver may be broken. This didn't happen when I was using ADSL (PPPoE, 15/1Mbps).
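For reference, pfSense 1.2.x exposes polling as a checkbox in the GUI, but the same toggle can be made from a shell on the underlying FreeBSD system. A sketch of how one might flip between the two modes for comparison (the interface name `vr0` and the sysctl values are assumptions for illustration):

```shell
# Enable device polling on the VIA NIC (per-interface style; requires a
# kernel built with "options DEVICE_POLLING", which pfSense's kernel has)
ifconfig vr0 polling

# Back to interrupt mode
ifconfig vr0 -polling

# Polling granularity is tied to the kernel tick rate, so a low kern.hz
# can cap packet throughput; inspect the current clock settings
sysctl kern.clockrate

# Tunables that influence polling behavior (example value, not a recommendation)
sysctl kern.polling.user_frac=50
```

Comparing FTP/iperf numbers immediately after each toggle, without rebooting, helps rule out other configuration differences between the runs.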
I'll do some more testing with m0n0wall, and I'd like to test with pfSense 1.2.1 since it's based on FreeBSD 6 rather than 7, but I can't manage to find it. Does someone have it somewhere? I really don't want to switch to another software platform, because pfSense is really the best for my use.
thanks for the help.
Self answer: http://files.pfsense.org/mirror/downloads/old/
-
Tested right now with 1.2.2: no change between interrupt and polling.

LAN FTP:
in > 45Mbps
out > 90Mbps

I really wonder what's going on; there must be something up with vr(4).

With internet access:
in > 22Mbps (vs. 90Mbps with the ISP box), capped the same way
out > 25Mbps (same value, redid the test)

ISP box result:
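To separate the vr(4)/hardware question from the ISP path, a direct test through the box with iperf (bundled on most BSD/Linux hosts) avoids FTP's disk and protocol overhead. A sketch, where the addresses and firewall setup are assumptions:

```shell
# On a host behind the LAN interface -- run the iperf server
iperf -s

# On a host plugged into the WAN side (with a firewall rule allowing it),
# send TCP traffic through the pfSense box to the LAN host:
# 30-second run, report every 5 seconds
iperf -c 192.168.1.100 -t 30 -i 5

# To measure the other direction, swap the server/client roles
# (server on the WAN-side host, client on the LAN host).
```

If iperf through the box shows the same ~22-25Mbps cap as the internet tests, the bottleneck is in the router (driver, CPU, or settings) rather than the ISP link.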
-
Are you using the traffic shaper? I have noticed that the traffic shaper seems to limit an Alix2d3 I'm playing around with to 20Mbit, even when configured for 50Mbit.
Also, the max speed I can route through my Alix is 60Mbit; I doubt it can even hit 90Mbit. Take a look at this thread:
http://forum.pfsense.org/index.php?topic=12766.msg69390
Josh -
My net5501 (same CPU, not sure about the VIA NICs) shows CPU usage around 40-50% doing 40mbps. You may not be able to push 90mbps consistently through the Alix, but certainly more than 25mbps. What does top tell you when testing throughput?
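On FreeBSD-based systems like pfSense, plain top hides the kernel threads where interrupt and polling load actually shows up, so a couple of flags help when watching a throughput test. A sketch (flag combinations are standard FreeBSD top usage; the exact thread names vary by release):

```shell
# -S includes system (kernel) processes; -H shows individual threads.
# Interrupt load from the NIC typically appears as an irq/vr0 kernel
# thread, polling load as softclock/idle poll threads.
top -S -H

# Batch mode one-shot snapshot, handy for pasting into a forum post:
# -b = non-interactive output, -d 2 = two display iterations
top -Sb -d 2 | tail -n 20
```

A CPU pegged in interrupt or system time during the transfer points at the driver or board limits; lots of idle time with low throughput points more toward packet loss or settings.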
I know m0n0wall had lockups on Alix boards until they patched the vr(4) driver in the latest release. I have no idea if the same issue affected pfsense, or if it might manifest as slow throughput.
-
The traffic shaper is off, and considering the amount of bandwidth I have, there is no real reason to turn it on.
My point is that with the same configuration, I get different results in reliability and performance depending on the pfSense release. I'm aware the Alix cannot handle 90Mbps constantly, but I'm worried about the fact that it loses packets with 1.2.3 and not with 1.2.1…
I'll have to check what top says. For now I'm sticking with 1.2.1, since I haven't seen any weird behavior since my last post.
Thanks for the hints.
-
Also, the max speed I can route through my alix is 60Mbit, I doubt it can even hit 90Mbit.
hm - my Alix + 1.2.3 combination can handle 90 Mbit (see attached):
-
I'm having performance issues too, as described in: http://forum.pfsense.org/index.php/topic,32383.0.html
When my WAN side is running at 111/11 mbit I am only able to get a throughput of around 25/3. When I re-configure it to run at 50/5 it runs at full speed. It seems that TCP timeouts might be an issue, as was suggested to me in the thread. But I really don't see why people report such varying performance on the same hard-/software. Of course it might be configured differently - but it doesn't sound like it…
-
ghm > any tweaks? what extra services do you use?
-
@sirjeannot:
ghm > any tweaks? what extra services do you use?
Pretty much a stock config - nothing fancy. Extra services are: siproxd, blinkled (LED assignment) and an OpenVPN config - nothing spectacular, but no performance killers such as Snort either (don't need it, have only very few open ports).
One aspect that might be important config-wise: I need VLANs both on the WAN side (my ISP expects tagged frames) and for the various LANs I have. It seems important that once you use VLANs on a given port, you no longer assign the parent port itself to a network but only the VLANs. E.g. if you have LAN11 and LAN12, don't just tag/assign LAN12 to VLAN ID 12 (aka eth1.12) and leave LAN11 on eth1. Nothing should be assigned to eth1 if there are VLANs on that port, so LAN11 must become a tagged VLAN as well, assigned to e.g. eth1.11.

What are your CPU / mem loads like when you run into the maximum throughput?
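On pfSense this is all done through the GUI, but the "tag everything, assign nothing to the parent" layout described above corresponds roughly to the following FreeBSD-level setup (the eth1.x names in the post are Linux-style; on the Alix's VIA NICs the parent would be a vr device -- the names, VLAN IDs, and addresses here are made-up examples):

```shell
# Bring the parent port up, but give it no address of its own --
# it only carries tagged frames
ifconfig vr1 up

# LAN11 as a tagged VLAN on the parent port
ifconfig vlan11 create vlan 11 vlandev vr1
ifconfig vlan11 inet 192.168.11.1/24

# LAN12 as a second tagged VLAN on the same parent
ifconfig vlan12 create vlan 12 vlandev vr1
ifconfig vlan12 inet 192.168.12.1/24
```

Mixing untagged traffic on the parent with tagged VLANs on the same port is exactly the configuration the post warns against.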