Snapshot from Aug 26 16:51:36 EDT 2013 - Slow IPv4 traffic
-
Hi pfSense,
I have now tried several snapshots since Wed Jun 12 06:19:17 EDT 2013 and I keep having problems with them. With all snapshots between Jun 12th and Aug 26th I haven't been able to get IPv4 traffic to work at all; since the Aug 26 16:51:36 EDT 2013 snapshot IPv4 traffic is working, but the speed is only 1-3Mbit. If I revert to my old backup from Wed Jun 12 06:19:17 EDT 2013 everything is fine and my IPv4 speed is as shown in my signature.
Any suggestions on how to proceed? Is there anything I can provide that would help diagnose this?
Edit: IPv6 traffic has been running perfectly on all snapshots.
-
-
Sounds like a NIC configuration issue. Have you tried changing the TCP segmentation and large receive offload settings? How about MBUF usage? Speed, Duplex, and MTU settings maybe.
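If it helps to narrow it down, here is a rough way to check those things from a pfSense/FreeBSD shell (assuming the WAN NIC attaches as igb0; adjust the name to whatever your box uses):
# Current mbuf usage and any denied requests
netstat -m | grep mbuf
# Active offload options (TSO4, LRO, ...), media (speed/duplex) and MTU on the WAN NIC
ifconfig igb0 | grep -E 'options|media|mtu'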
-
Sounds like a NIC configuration issue. Have you tried changing the TCP segmentation and large receive offload settings? How about MBUF usage? Speed, Duplex, and MTU settings maybe.
The NIC is a Fujitsu D2735-2 with an Intel 82576NS chip. I never had issues with it on previous installs, and as said, when reverting to my backup there are no problems (have the drivers been updated?). MBUF usage was not alarming. It's only IPv4 traffic that is affected; IPv6 is still running as fast as before. So if it's only IPv4 traffic with issues, would it help to move away from auto-negotiation and lock it at 1000Mbit full duplex?
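For what it's worth, one way to see which igb driver a given snapshot is running (and whether it changed between builds) is, again assuming the card attaches as igb0:
# Driver attach messages, usually including the driver version
dmesg | grep -i igb
# Device description string for the first igb port
sysctl dev.igb.0.%desc
This is only a sketch; the exact output depends on the driver version in the snapshot.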
-
Depends on what the other side is configured as. If the other side is set to auto-negotiate, then leave it at auto. If the other side is set to a fixed speed and duplex, then set your speed and duplex to match the other side.
I have Intel 1000PT NICs and a 100Mbps WAN connection. I see the link usage peak in the 90s consistently.
Perhaps it's doing TCP offloading to the NIC for IPv4 which it is not doing for IPv6.
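If you want to test a fixed speed/duplex from the shell before changing it in the GUI, something along these lines should work on FreeBSD (assuming igb0 is the WAN NIC; the GUI interface settings will reapply whenever they are saved):
# Lock the NIC to gigabit full duplex
ifconfig igb0 media 1000baseT mediaopt full-duplex
# Return to auto-negotiation
ifconfig igb0 media autoselect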
-
Depends on what the other side is configured as. If the other side is set to auto-negotiate, then leave it at auto. If the other side is set to a fixed speed and duplex, then set your speed and duplex to match the other side.
I have Intel 1000PT NICs and a 100Mbps WAN connection. I see the link usage peak in the 90s consistently.
Perhaps it's doing TCP offloading to the NIC for IPv4 which it is not doing for IPv6.
I'll try disabling "TCP segmentation offload" and "large receive offload" tonight; I don't want to do it while users are most active. And of course I'll also set a fixed speed.
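For reference, the shell equivalents would be roughly the following (assuming igb0 again, and noting that the checkboxes under System > Advanced > Networking are the persistent way to do this in pfSense):
# Turn TCP segmentation offload and large receive offload off on the interface
ifconfig igb0 -tso -lro
# Turn them back on
ifconfig igb0 tso lro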
But something must have changed in pfSense between June and August, since this problem does not appear in the June snapshots but starts from July onward. Otherwise it does not make much sense to me.
ssheikh, thanks so far! I appreciate your help.
-
1: Switched from auto-negotiation to fixed speed 1000Mbit full duplex.
MBUF Usage: 38% (9750/25600)
2.5Mbit down
1.0Mbit up - No visible effect.
2: Disabled hardware TCP segmentation offload + restarted the WAN interface.
MBUF Usage: 38% (9750/25600)
3.0Mbit down
1.0Mbit up - No visible effect.
3: Enabled hardware TCP segmentation offload again and disabled hardware large receive offload.
MBUF Usage: 38% (9750/25600)
769Mbit down
8.63Mbit up - Some effect, especially on download.
4: Disabled hardware TCP segmentation offload again and disabled hardware large receive offload.
MBUF Usage: 39% (9926/25600)
763Mbit down
8.2Mbit up - No remarkable change compared to the above.
Will revert back to my backup again and do a new test.
After recovery I tested:
Enabled hardware TCP segmentation offload and enabled hardware large receive offload.
MBUF Usage: 9638/25600
727Mbit down
718Mbit up - No visible effect.
-
Hello,
Did you ever get a response? I am having the same issue.
-
Nope, no response. But I have reverted back to a previous snapshot.
My guess is it's a driver issue, but I am not sure.
-
Hello,
Moved the WAN connection from an Intel igb (PCI-E) card to an em (PCI-X) interface card and speed is now normal with the latest snapshot. I would like to help resolve the issue if someone can tell me what to look for. I also want to go back to all PCI-E for the interfaces.
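If it helps with comparing the two setups, here is a sketch of what could be captured on both the igb and em configurations (just things to gather, not a definitive diagnosis):
# How the cards are attached (bus, chip, revision)
pciconf -lv
# Interrupt rates per device, to spot one NIC behaving very differently
vmstat -i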
-
I hope it will be fixed. I'll test your workaround of changing from PCI-E to PCI-X.
-
I can see the problem still persists in 2.1-RELEASE. If I disable hardware large receive offload, the download rate is fine, but the upload is still limited to 7-10Mbit, where with previous releases it lies around 700-900Mbit.
IPv6 is still fine, both download and upload.
IPv4:
c0urier@mail-proxy:~$ wget -4O /dev/null URL/1000M.zip
--2013-09-16 13:18:42--  http://URL/1000M.zip
Resolving URL (URL)... 109.124.XXX.XXX
Connecting to URL (URL)|109.124.XXX.XXX|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1048576001 (1000M) [application/zip]
Saving to: `/dev/null'
 0% [                                       ] 758,752     126K/s  eta 2h 6m
IPv6:
c0urier@mail-proxy:~$ wget -6O /dev/null URL/1000M.zip
--2013-09-16 13:18:52--  http://URL/1000M.zip
Resolving URL (URL)... 2001:470:28:XXX:XXX:XXXX:fe25:ca9f
Connecting to URL (URL)|2001:470:28:XXX:XXX:XXXX:fe25:ca9f|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1048576001 (1000M) [application/zip]
Saving to: `/dev/null'
15% [==============================>        ] 165,900,293 21.5M/s  eta 32s
-
Okay, I might have found a solution for this.
Add:
dev.igb.0.enable_lro=0
dev.igb.1.enable_lro=0
to /etc/sysctl.conf and restart pfSense. This solved my IPv4 speed problem.
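In case it is useful to others: the same tunables can be inspected, and depending on the driver version possibly set, at runtime with sysctl before putting them in /etc/sysctl.conf:
# Check the current value
sysctl dev.igb.0.enable_lro
# Try applying it without a reboot (may be read-only with some driver versions)
sysctl dev.igb.0.enable_lro=0
On pfSense the same entries can also be added under System > Advanced > System Tunables, which keeps them in the config backup.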