Jumbo Frames not forwarding between VLAN interfaces
Crusnik01:
1x LAGG hosting a number of VLAN interfaces. LAGG MTU is set to 9000, and all VLAN interfaces inherit it.
Jumbo frames are configured on all switches and devices.
I noticed that pinging with large packets no longer works between hosts.
What has changed is that I previously ran a single large network, so the switches took care of all packets on the LAN side.
Now everything is as it should have been from the start: segmented into many VLANs, with all traffic routed through the pfSense box.
Pinging between the two hosts (PC -> NAS) with an 8k packet:
C:\>tracert -d 10.1.2.20
Tracing route to 10.1.2.20 over a maximum of 30 hops
  1    <1 ms    <1 ms    <1 ms  10.1.3.1
  2    <1 ms    <1 ms    <1 ms  10.1.2.20
Trace complete.

C:\>ping -l 8000 -f 10.1.3.1
Pinging 10.1.3.1 with 8000 bytes of data:
Reply from 10.1.3.1: bytes=8000 time<1ms TTL=64
Reply from 10.1.3.1: bytes=8000 time<1ms TTL=64
Reply from 10.1.3.1: bytes=8000 time<1ms TTL=64
Reply from 10.1.3.1: bytes=8000 time<1ms TTL=64
Ping statistics for 10.1.3.1:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms

C:\>ping -l 8000 -f 10.1.2.20
Pinging 10.1.2.20 with 8000 bytes of data:
Request timed out.
Request timed out.
Ping statistics for 10.1.2.20:
    Packets: Sent = 2, Received = 0, Lost = 2 (100% loss),
Now doing the same, but from the firewall directly (pfSense -> PC, pfSense -> NAS):
[2.3.1-RELEASE][admin@fw]/: ping -s 8000 -D 10.1.3.2
PING 10.1.3.2 (10.1.3.2): 8000 data bytes
8008 bytes from 10.1.3.2: icmp_seq=0 ttl=128 time=0.861 ms
8008 bytes from 10.1.3.2: icmp_seq=1 ttl=128 time=0.807 ms
^C
--- 10.1.3.2 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.807/0.834/0.861/0.027 ms

[2.3.1-RELEASE][admin@fw]/: ping -s 8000 -D 10.1.2.20
PING 10.1.2.20 (10.1.2.20): 8000 data bytes
8008 bytes from 10.1.2.20: icmp_seq=0 ttl=64 time=0.602 ms
8008 bytes from 10.1.2.20: icmp_seq=1 ttl=64 time=0.639 ms
^C
--- 10.1.2.20 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.602/0.621/0.639/0.018 ms
So jumbo frames are obviously working, just not when the packets pass between two interfaces on the pfSense box.
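A quick way to narrow down where the jumbo path breaks is to probe with the DF (don't fragment) bit set and work out the largest payload each MTU allows. A minimal sketch; the MTU values are examples, and 10.1.2.20 is the NAS from the transcripts above:

```shell
# Largest ICMP payload that fits in one frame:
# MTU - 20 bytes (IPv4 header) - 8 bytes (ICMP header).
for mtu in 1500 7422 9000; do
  payload=$((mtu - 28))
  echo "MTU $mtu -> largest unfragmented ping payload: $payload"
done
# Then probe with the DF bit set:
#   Windows:          ping -l 8972 -f 10.1.2.20
#   pfSense/FreeBSD:  ping -s 8972 -D 10.1.2.20
# If the 8972-byte probe dies while smaller ones pass, some hop in the
# path has a smaller effective MTU than the interfaces claim.
```

Stepping the payload down (8972, 7394, 1472, ...) shows whether the whole jumbo range is blocked or only sizes above some threshold.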
SoulChild:
I once read that with certain NIC chips you can only go up to an MTU of 7422.
Try setting the MTU to 7000 and see whether the problem persists.
SoulChild:
But personally, I think jumbo frames are not worth the hassle anyway, especially for NAS traffic.
Unless you're doing intensive iSCSI traffic, jumbo frames are usually a complete waste of effort. Historically, it was worth it on low-speed links, but at gigabit and up you're trying to optimize something that already works fine.
At least in my opinion.
"Jumbo Frames configured on all switches, and devices."
So your phones (wired and/or wifi) and other wifi devices are doing jumbo frames? What about your TV or your DVR? What about your thermostat or your toaster?
Jumbo frames might be of some use on a SAN, or on another layer-2 segment where traffic is not routed and can take advantage of the large MTU (say vMotion, FCoE, or the already-mentioned iSCSI). Other than that, I'm with SoulChild on it being pretty pointless on the rest of your network.
Your printers support jumbo, do they?
Have you actually benchmarked your applications with a standard MTU of 1500 versus jumbo? Many applications never send full data packets anyway: lots of little packets on the wire, where jumbo does nothing.
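To put rough numbers on that: for bulk transfers, the most jumbo frames can buy you is the per-packet header overhead. A back-of-the-envelope sketch, counting only the 40 bytes of IPv4 + TCP headers (no options, no Ethernet framing); the iperf3 addresses are just the hosts from this thread:

```shell
# Per-packet IPv4+TCP header overhead as a share of the MTU.
for mtu in 1500 9000; do
  awk -v m="$mtu" 'BEGIN { printf "MTU %d: header overhead %.2f%%\n", m, 40 * 100 / m }'
done
# Roughly 2.67% at MTU 1500 vs 0.44% at MTU 9000: a ceiling of about
# a 2% throughput gain for bulk TCP, before any application behavior.
# Measure the real difference, e.g. with iperf3:
#   on the NAS: iperf3 -s        on the PC: iperf3 -c 10.1.2.20
```

If a benchmark at MTU 1500 already saturates the link, jumbo frames have essentially nothing left to give.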
As for trunking your traffic over a lagg: that is a hairpin, and you do understand that when two devices talk, they use only one member connection of the lag. So the hairpin halves the available bandwidth on the physical interface for that conversation. Do you still think jumbo frames add any real value here for moving large amounts of data?
Lagg, port channel, EtherChannel, whatever you want to call it: 1 gig + 1 gig does not equal 2 gig. It equals two 1-gig connections.
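The reason a single conversation can't exceed one member link is that the lagg picks a member per flow by hashing frame headers, so every packet of that flow lands on the same physical port. A toy sketch; the XOR hash and MAC addresses here are illustrative only, as real lagg/LACP hash policies vary by vendor and configuration:

```shell
# Toy flow-to-member selection: hash the source/destination MACs and
# take it modulo the number of member links. Every packet of this
# conversation maps to the same member, so it never sees 2 gig.
src_mac=$(( 0x001122334455 ))   # hypothetical PC MAC
dst_mac=$(( 0x66778899AABB ))   # hypothetical NAS MAC
members=2
link=$(( (src_mac ^ dst_mac) % members ))
echo "PC <-> NAS conversation pinned to member link $link"
# The second 1 Gb/s member adds capacity for OTHER flows, not this one.
```

That's why a lagg helps aggregate throughput across many hosts, but does nothing for one large transfer between two machines.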
If you're looking for performance on inter-VLAN traffic, I sure wouldn't trunk the connection. Put each VLAN on its own uplink so you don't hairpin. That will probably give you far more bang for the buck than any jumbo frames.
If you need more than 1 gig, get a bigger uplink, 10 gig for example. Lagg, to be honest, is nice for mitigating a failed port or switch if you set it up correctly, but as for giving you a fatter pipe, not so much. And then you just hairpin anyway? Trunking more than one VLAN on the same connection is fine when the VLANs on that connection don't need to talk to each other and only talk to VLANs on other uplinks. But when devices go through the same uplink only to be routed back to another VLAN on that same uplink, you halve your possible bandwidth because of the hairpin.