Ping timeouts on servers behind bridge
-
Hi Guys,
I have a problem. I have installed pfSense 2.1 amd64, and now i386, to make sure my problem is not software related.
The situation is: I have the WAN bridged to my DMZ (because the WAN IP is in the same range as all my servers behind the DMZ).
When I put one server behind the DMZ everything works fine, but when I put several servers behind the DMZ timeouts occur, for example on ping. So it looks like the more traffic goes through the pfSense firewall, the more problems I get.
The available bandwidth is 100 Mb; pfSense only uses 2 or 3 of that.
It looks like the errors occur once I get over 10,000 states. In the firewall settings I see this:
"Note: Leave this blank for the default. On your system the default size is: 326000" - so that should be okay... (I tried changing this to 100,000 etc., no result.)
Fact is that the more traffic goes over the pfSense firewall, the more timeouts and problems I have...
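(For reference, a quick way to see whether the state table really is the bottleneck, assuming shell access via Diagnostics > Command Prompt or SSH:
pfctl -si | grep -i entries    # "current entries" = number of states in use right now
pfctl -sm                      # shows the hard limits, including the states limit
If "current entries" stays well below the limit while the timeouts happen, the state table size is probably not the cause.)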
Okay, what I did:
Reinstalled pfSense 2.1, now with the i386 version instead of amd64.
After the install I added NO packages.
I did a WAN -> DMZ setup and I did a LAN setup (3x NIC).
I added all 3 GitHub fixes (as mentioned before).
So: setup with 1 non-busy server behind the DMZ looks okay, a second one (also a non-busy server) looks okay. Then a third server with some more traffic, and ping loss, more down than up (load on the pfSense firewall is 0.2).
If I put only the busier server behind pfSense, all is fine...
So the busier server went back behind the SonicWall and everything runs smoothly.
I made these changes like Steve mentioned in another topic:
Initially:
https://github.com/pfsense/pfsense/commit/f3a4601c85c4de78caa4f12fefd64067fd83dbe8
and then:
https://github.com/pfsense/pfsense/commit/58ee84b4b2f9daba87e44abf663026c6266a7cd8
and
https://github.com/pfsense/pfsense/commit/793299b8f5bdc0fd167093cc5ab9f3f30f0d77ac
-
Have you done this?:
https://doc.pfsense.org/index.php/Tuning_and_Troubleshooting_Network_Cards#Intel_igb.284.29_and_em.284.29_Cards
Steve
-
Ok, so just to be sure, you did add:
kern.ipc.nmbclusters="131072"
hw.em.num_queues=1
to /boot/loader.conf.local?
Steve
-
Yes, that is correct… I did that.
-
I did that in the new file /boot/loader.conf.local and these 2 lines are in it:
kern.ipc.nmbclusters="131072"
hw.em.num_queues=1
-
@stephenw10 are the settings I use for the bridge okay?
The WAN and the servers behind the DMZ are in the same range, so I have bridged WAN with DMZ. All servers should be reachable for hosting services.
WAN cable into the WAN port of the pfSense box (bridge) -> DMZ cable to the switch the servers are behind.
-
This is the situation now
-
So that's 3 separate firewalls all configured similarly only one of which is pfSense?
Are you using static IPs?
Steve
-
Correct.
Static IPs, and several firewalls, all WAN-to-DMZ bridged, and only one of them is pfSense.
When you set up a bridge in pfSense, do you need to set up advanced options or something?
-
No, nothing special.
You can change where bridge filtering is applied, but in your case I don't think that would be necessary. I assume you have WAN set as DHCP, DMZ set as none and bridge0 not assigned?
Steve
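(Side note: the bridge filtering mentioned here is controlled by two FreeBSD sysctls, which pfSense exposes under System > Advanced > System Tunables; a minimal sketch of checking them from the shell:
sysctl net.link.bridge.pfil_member   # 1 = filter on the member interfaces (WAN/DMZ); this is the default
sysctl net.link.bridge.pfil_bridge   # 1 = filter on bridge0 itself instead; default is 0
With the defaults, firewall rules are evaluated on WAN and DMZ rather than on bridge0.)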
-
Can the STP setting help if I enable it on WAN and DMZ and give WAN a lower priority?
No, WAN is a static IP (for example 20.20.20.2), then DMZ is none with bridge0, and all servers behind it are in the same range (also static, 20.20.20.x).
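(On the STP question above: a minimal sketch of what the member settings would look like from the shell, assuming you want to try it; the same options exist in the GUI under the bridge's advanced settings. Note that for STP a lower numeric priority means a more preferred port:
ifconfig bridge0 stp em0             # enable (R)STP on the WAN member
ifconfig bridge0 stp em1             # enable (R)STP on the DMZ member
ifconfig bridge0 ifpriority em0 64   # prefer em0; the default member priority is 128
Whether STP actually helps with the timeouts is a separate question.)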
ifconfig looks like this (IP changed to 20.20.20.2): em0 (WAN), em1 (DMZ), em2 not used, em3 (local IP address for backup login)
em0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=4209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,VLAN_HWTSO>
    ether f4:6d:04:9e:36:d0
    inet 20.20.20.2 netmask 0xffff0000 broadcast 20.20.255.255
    inet6 fe80::f66d:4ff:fe9e:36d0%em0 prefixlen 64 scopeid 0x1
    nd6 options=1<PERFORMNUD>
    media: Ethernet autoselect (100baseTX <full-duplex>)
    status: active
em1: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=4209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,VLAN_HWTSO>
    ether f4:6d:04:9e:36:d1
    inet6 fe80::f66d:4ff:fe9e:36d1%em1 prefixlen 64 scopeid 0x2
    nd6 options=1<PERFORMNUD>
    media: Ethernet autoselect (100baseTX <full-duplex>)
    status: active
em2: flags=8c02<BROADCAST,OACTIVE,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=4219b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,WOL_MAGIC,VLAN_HWTSO>
    ether f4:6d:04:9e:36:d2
    media: Ethernet autoselect
    status: no carrier
em3: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
    options=4209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,VLAN_HWTSO>
    ether f4:6d:04:9e:36:d3
    inet 10.1.1.191 netmask 0xffffff00 broadcast 10.1.1.255
    inet6 fe80::f66d:4ff:fe9e:36d3%em3 prefixlen 64 scopeid 0x4
    nd6 options=1<PERFORMNUD>
    media: Ethernet autoselect (10baseT/UTP <full-duplex>)
    status: active
plip0: flags=8810<POINTOPOINT,SIMPLEX,MULTICAST> metric 0 mtu 1500
enc0: flags=0<> metric 0 mtu 1536
pflog0: flags=100<PROMISC> metric 0 mtu 33192
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
    options=3<RXCSUM,TXCSUM>
    inet 127.0.0.1 netmask 0xff000000
    inet6 ::1 prefixlen 128
    inet6 fe80::1%lo0 prefixlen 64 scopeid 0x8
    nd6 options=3<PERFORMNUD,ACCEPT_RTADV>
pfsync0: flags=0<> metric 0 mtu 1460
    syncpeer: 224.0.0.240 maxupd: 128 syncok: 1
bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
    ether 02:7c:9a:3a:65:00
    id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
    maxage 20 holdcnt 6 proto rstp maxaddr 100 timeout 1200
    root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
    member: em1 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP> ifmaxaddr 0 port 2 priority 128 path cost 2000000
    member: em0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP> ifmaxaddr 0 port 1 priority 128 path cost 2000000
-
Okay,
I went back to the original rc.newwanip and did only this:
https://github.com/pfsense/pfsense/commit/f3a4601c85c4de78caa4f12fefd64067fd83dbe8
and added /boot/loader.conf.local with these 2 lines in it:
kern.ipc.nmbclusters="131072"
hw.em.num_queues=1
Rebooted.
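(A quick way to confirm the tunables survived the reboot, assuming shell access:
sysctl kern.ipc.nmbclusters   # active value, should now report 131072
kenv hw.em.num_queues         # loader variable, should report 1
netstat -m                    # current mbuf/cluster usage against those limits
)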
Under Firewall/NAT I checked:
Static route filtering: Bypass firewall rules for traffic on the same interface
IP Do-Not-Fragment compatibility: Clear invalid DF bits instead of dropping the packets
The servers are timing out a lot less now.
Maybe once in 30 pings, sometimes 2 pings in a row…
What I see in the logs at those times are TCP:FA / TCP:A blocks on packets from the DMZ; has that got anything to do with it?
For example:
block
Jan 16 14:14:03 DMZ serverip:80 ipaddress:50155 TCP:A
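(For what it's worth: TCP:A and TCP:FA are just the TCP flags of the blocked packet, ACK and FIN+ACK respectively. A minimal sketch of pulling those blocked entries out of the raw filter log on 2.1, assuming shell access:
clog /var/log/filter.log | grep -E 'TCP:(A|FA)'
)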