SOLVED: Extremely Slow Upload
I've recently come across a strange issue with pfSense whereby it's severely limiting my upload speeds. More specifically, I used to have 6M/800k DSL speeds and recently upgraded to 25M/7M DSL. Despite this, the upload speed usually sits around 500k on speedtests, while download speeds correctly run at 25M.
I did have the traffic shaper enabled and running, but I no longer need it now that there is more than enough bandwidth to satisfy everyone. There are no limiters, and the traffic shaper has been removed.
I'm sure this is an issue with pfSense because when I dial the modem out directly from Windows Server I achieve full upload of ~6.96M.
Has anybody ever come across this issue before or have any suggestions as to what to look into? I couldn't find much concrete information about this issue online…
And before I forget, I'm on 2.0.1-RELEASE in a virtual environment.
After you removed the shaper, did you reboot? Sometimes that's necessary. Also, did you check /tmp/rules.debug to see if any traffic shaping rules are still left over? Did you use limiters at all?
Yes, I did try rebooting - when in doubt, reboot ;D
I just looked at /tmp/rules.debug and couldn't see anything particularly noteworthy in there. I searched for the words "traffic", "shaping", and "limit", and these were the only lines that contained any of them:
set limit states 72000
set limit src-nodes 72000
The rest of them just seem to be my firewall and NAT rules for the most part which, in reality, shouldn't really cause any issues as far as I'm aware.
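For what it's worth, a quick way to check for leftover shaper config is to grep for the shaper and limiter keywords; the sample file below is a hypothetical illustration (the altq/dnpipe lines are examples of what leftover shaping would look like, not my actual rules):

```shell
# Hypothetical rules.debug fragment: the first two lines are the normal
# state-table limits; the last two are what leftover shaper (altq) and
# limiter (dnpipe) config would look like.
cat > /tmp/rules.sample <<'EOF'
set limit states 72000
set limit src-nodes 72000
altq on em0 hfsc bandwidth 7Mb queue { qACK, qDefault }
pass in quick on em1 inet proto tcp from any to any dnpipe (1,2)
EOF

# 'set limit' is harmless; any altq or dnpipe hits mean shaping is still active.
grep -E 'altq|dnpipe' /tmp/rules.sample
```

In my case the real file has no such lines, so shaping genuinely seems to be gone.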
Any other suggestions? I might just perform a fresh install, test that, and then import the original configuration.
So, I put together a completely fresh pfSense installation and swapped virtual machines… Same behavior!
With that in mind, does anybody have any idea why a fresh install of pfSense 2.0.1 would be limiting upstream performance from 6.96Mbps to ~2.16Mbps?
Slow upload speed is the telltale sign of a duplex mismatch, so that may be the case here. The Windows server you mentioned trying: is that a VM on the same hypervisor using the same NICs?
Thanks for the reply cmb. In the test I performed, I dialed it out directly from the host system, but the physical hardware configuration was not changed in any way.
Following your suggestion, I forced the NIC to 100Mbps Full Duplex, but now I'm not sure how to check the "virtual" duplex negotiated inside the VM. Normally I'd use ethtool, but that doesn't seem to be in this distro.
A duplex mismatch most likely wouldn't be inside the VM; it'd be from the host to whatever it's plugged into. And be careful messing around with forcing duplex: whatever you're plugged into must be set the same way. Check the NIC info on the hypervisor, what it's negotiated to, whether it's seeing errors, etc.
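On the pfSense side (it's FreeBSD-based, hence no ethtool), ifconfig reports the negotiated media directly on its "media:" line. A minimal sketch of pulling the duplex out of that line; the interface name em0 and the sample output are assumptions, so substitute your own:

```shell
# On the firewall you would run:  ifconfig em0
# and look at the 'media:' line. Here we parse a hypothetical sample of it.
sample='media: Ethernet autoselect (100baseTX <full-duplex>)'

# Extract whatever is between the angle brackets (the negotiated duplex).
echo "$sample" | sed -n 's/.*<\(.*\)>.*/\1/p'
```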
I suppose the reason I forced the duplex on the NIC was that it was just the easiest way to verify that it was being negotiated at full duplex. That is to say, with auto-negotiate it would be difficult to tell (Windows has no real way of showing the current duplex setting without vendor-specific software, afaik). My thinking was simply that if I forced 100Mbps Full Duplex in the driver and it worked at all, then I'd know definitively that the hardware had negotiated full duplex (otherwise I'd have lost the link).
In any case, the modem log shows this with respect to the connection to the hypervisor's physical NIC: Link 1 Up - 100Base-TX Full Duplex
The network topology is really pretty basic and goes something like this:
Modem <--> NIC <--> VMnet2 bridged <--> Virtual adapter
A PPPoE connection is negotiated through that link. I know it's a little bit of a longshot, but is it still worth looking into duplex issues within the VM? Thanks for the help with this issue, I really appreciate it!
My thinking was simply that if I forced 100Mbps Full Duplex in the driver and it worked at all, then I'd know definitively that the hardware had negotiated full duplex (otherwise I'd have lost the link).
No. If you force one end and the other end is on auto, the other end sees no negotiation and, per the spec, has to assume it's connected to a device that doesn't support duplex negotiation, which requires falling back to half duplex. So your forced end runs full, the other end runs half; the link is up and functional, but the resulting duplex mismatch will slow things down considerably, especially on the upload.
I've never seen or heard of any kind of negotiation issue within VMs; that's highly unlikely, so I would just leave everything in the VMs set to auto. The connection from the VM host to the physical device is where that problem would reside, if anywhere. Generally, residential-grade equipment will always be auto, so make sure the NIC on the host is set that way as well. Since the server itself is fine, that should in general rule out any problems along those lines. That likely leaves the issue somewhere in the VM networking or something related, which is a tough one to troubleshoot without getting on the box and seeing what's happening at the VM and host level. Analyzing pcaps from the host and the VM firewall is the next step.
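A sketch of what that comparison could look like: capture on both sides, decode, and look for segments that were sent more than once (heavy retransmission on one side but not the other localizes the loss). The interface names and the decoded sample lines below are hypothetical illustrations, not output from this setup:

```shell
# You would capture on the host NIC and inside the VM firewall, e.g.:
#   tcpdump -ni em0 -w /tmp/host.pcap
# (interface name em0 is an assumption; adjust to your setup)
#
# In the decoded output, any TCP seq range that appears more than once
# for the same flow is a retransmission. Hypothetical sample lines:
cat > /tmp/sample.txt <<'EOF'
12:00:01.000 IP 10.0.0.5.51000 > 93.184.216.34.443: Flags [.], seq 1:1449, length 1448
12:00:01.300 IP 10.0.0.5.51000 > 93.184.216.34.443: Flags [.], seq 1:1449, length 1448
12:00:01.900 IP 10.0.0.5.51000 > 93.184.216.34.443: Flags [.], seq 1449:2897, length 1448
EOF

# Count duplicated seq ranges (i.e. retransmitted segments).
awk '{print $9}' /tmp/sample.txt | sort | uniq -d | wc -l
```

If the VM-side capture shows clean sends but the host-side capture shows lots of duplicates (or vice versa), that tells you which hop is dropping frames.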
Have you considered that traffic shaping on the port group on the host could be a factor?
I am pleased to report that I have discovered a solution. Thanks to everybody for eventually pushing me towards a solution!
In the end, this issue was caused by a physical network configuration that VMware did not like; in fact, all VMs were suffering from poor network performance.
More specifically, the LAN side of the host system was running a teamed connection using the 802.3ad protocol. This really served no purpose other than my own vanity. The solution was simply to take apart this teamed link and run the gigabit Ethernet ports individually, while disabling all unnecessary adapters and services. I also followed as many of the recommendations in the following document as my hardware allowed: http://www.vmware.com/pdf/ws7_performance.pdf
Again, thanks to everybody that pitched in on this problem and pushed me in the right direction, it is greatly appreciated!