Listen queue overflow
-
Running pfBlockerNG-devel 2.2.5_32 on 2.4.5-RELEASE (amd64).
I've noticed my logs are filling up with messages along the lines of:
May 25 07:24:33 192.168.64.1 kernel: sonewconn: pcb 0xfffff800663993a0: Listen queue overflow: 193 already in queue awaiting acceptance (605 occurrences)
May 25 07:25:39 192.168.64.1 kernel: sonewconn: pcb 0xfffff800663993a0: Listen queue overflow: 193 already in queue awaiting acceptance (5 occurrences)
May 25 07:26:39 192.168.64.1 kernel: sonewconn: pcb 0xfffff800663993a0: Listen queue overflow: 193 already in queue awaiting acceptance (915 occurrences)
I think this is related to the ports DNSBL is listening on:
[2.4.5-RELEASE][root@x]/root: netstat -Lan
Current listen queue sizes (qlen/incqlen/maxqlen)
Proto Listen         Local Address
tcp4  0/0/128        10.99.99.1.443
tcp4  193/0/128      *.8443
tcp4  163/0/128      *.8081
tcp4  0/0/128        127.0.0.1.953
tcp4  0/0/128        *.53
tcp6  0/0/128        *.53
tcp6  0/0/128        *.80
tcp4  0/0/128        *.80
tcp6  0/0/128        *.443
tcp4  0/0/128        *.443
tcp4  0/0/128        *.22
tcp6  0/0/128        *.22
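In case anyone wants to dig further, something along these lines should show which daemon owns the 8443/8081 listeners and what the kernel backlog cap is (sockstat and sysctl are in the FreeBSD base system, so nothing extra to install; the port numbers are just the ones from my netstat output above):

# which processes are bound to the DNSBL web server ports
sockstat -4l | grep -E ':(8443|8081)'
# kernel-wide ceiling on listen backlogs; maxqlen is clamped to this (128 by default)
sysctl kern.ipc.somaxconn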
Any pointers to what's going on?
-
@bigsy I see the same errors...
-
This post is deleted!
-
@jdeloach FWIW I am running the latest versions: pfSense+ 21.05 Release (on a 2100) and pfBlockerNG-devel 3.0.0_16.
While I was troubleshooting a connectivity issue on one of my two WANs (PPPoE), I noticed the Listen Queue errors. I'm not sure they were related to my issue, but I did switch DNSBL from Unbound Python Mode (which I understand is beta) to Unbound Mode and rebooted. Currently I don't see the error appearing, and connectivity is restored.
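If anyone wants to double-check which mode their resolver is actually running in after a change like this, I believe the Python mode registers itself in the generated Unbound config (the /var/unbound/unbound.conf path and the pfb_unbound.py name are assumptions on my part), so a quick grep should tell:

# Python mode should show "python" in module-config plus a python-script line;
# plain Unbound mode should show neither
grep -E 'module-config|python-script' /var/unbound/unbound.conf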
This change also seems to have stopped my disk volume from slowly filling up. While using Unbound Python mode, changing log settings (or even clearing logs) didn't help; only a reboot would bring the disk usage back to normal. I tried to locate the directory where the storage was being hogged, but curiously, I could not find that either. (Note my skills are 'fair', certainly not 'expert'.) df and du sorted by largest, etc., didn't help.
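One thing worth checking next time (just a guess on my part): if a process keeps a deleted log file open, the space never shows up in du and is only released when the process restarts, which would match the reboot fixing it. On FreeBSD, fstat from the base system can list open files with their sizes, something like:

# list open files, roughly sorted by the SZ|DV column (largest first);
# column positions can vary by entry, so treat this as a rough sketch
fstat | sort -k8 -n -r | head -20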
Currently all is good so ... all the above... FWIW..
Tnx!
-
@jdeloach I posted that message in May 2020.