pfSense 2.6 problem with zombie processes
-
@stephenw10 Yes, I can run df -hT manually. That was the first thing I checked.
": [{"name":"/dev/ufsid/60618bb1ebc69388","type":"ufs ","blocks":"5.5G","used":"2.0G","available":"3.1G","used-percent":39,"mounted-on":"/"}, {"name":"devfs","type":"devfs","blocks":"1.0K","used":"1.0K","available":"0B","used-percent":100,"mounted-on":"/dev"}, {"name":"tmpfs","type":"tmpfs","blocks":"4.0M","used":"152K","available":"3.9M","used-percent":4,"mounted-on":"/var/run"}, {"name":"devfs","type":"devfs","blocks":"1.0K","used":"1.0K","available":"0B","used-percent":100,"mounted-on":"/var/dhcpd/dev"}]} -
But if you run /bin/df -hT manually at the command line repeatedly does it ever fail? Like after 100 tries?
It will likely only fail as often as the widget does. Or if you happen to run it when the qemu agent is also trying to access the filesystem. The JSON output there seems to be missing the initial terms. I expect it to read like:
[22.05-DEVELOPMENT][admin@plusdev-2.stevew.lan]/root: /bin/df -hT --libxo=json
{"storage-system-information": {"filesystem": [{"name":"/dev/ufsid/626069f74a9f0e6e","type":"ufs ","blocks":"9.2G","used":"1.5G","available":"7.0G","used-percent":18,"mounted-on":"/"}, {"name":"devfs","type":"devfs","blocks":"1.0K","used":"1.0K","available":"0B","used-percent":100,"mounted-on":"/dev"}, {"name":"tmpfs","type":"tmpfs","blocks":"4.0M","used":"112K","available":"3.9M","used-percent":3,"mounted-on":"/var/run"}, {"name":"devfs","type":"devfs","blocks":"1.0K","used":"1.0K","available":"0B","used-percent":100,"mounted-on":"/var/dhcpd/dev"}]} }
Steve
-
@stephenw10
{"storage-system-information": {"filesystem": [{"name":"/dev/ufsid/60618bb1ebc69388","type":"ufs ","blocks":"5.5G","used":"2.0G","available":"3.1G","used-percent":39,"mounted-on":"/"}, {"name":"devfs","type":"devfs","blocks":"1.0K","used":"1.0K","available":"0B","used-percent":100,"mounted-on":"/dev"}, {"name":"tmpfs","type":"tmpfs","blocks":"4.0M","used":"152K","available":"3.9M","used-percent":4,"mounted-on":"/var/run"}, {"name":"devfs","type":"devfs","blocks":"1.0K","used":"1.0K","available":"0B","used-percent":100,"mounted-on":"/var/dhcpd/dev"}]}
}
Possibly I copied only part of the output. I tried running /bin/df -hT --libxo=json in a loop 1000 times. Every run was successful.
#!/bin/sh
for i in $(seq 1 1000)
do
/bin/df -hT --libxo=json
done -
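If the failure is intermittent, a variant of the loop above that counts failed runs instead of discarding the output may be more telling. A sketch, where a run is treated as failed if df exits non-zero or prints nothing:

```shell
#!/bin/sh
# Run df 1000 times and count the runs that fail or produce no output,
# rather than printing every successful run's JSON.
fails=0
for i in $(seq 1 1000); do
    # Capture stdout+stderr; the if-condition also checks the exit status.
    if out=$(/bin/df -hT --libxo=json 2>&1) && [ -n "$out" ]; then
        : # run succeeded, nothing to do
    else
        fails=$((fails + 1))
    fi
done
echo "failed runs: $fails"
```

This makes a rare failure visible without scrolling through a thousand successful outputs.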
Hmm. If you remove the qemu-agent can I assume this goes away?
-
@stephenw10
When I stopped the qemu-ga service, the zombie processes disappeared.
I have checked this several times. But I need qemu-ga in my oVirt installation. -
Hmm, well can you configure it not to query the disk status? I would assume that also solves it.
Can you test qemu-agent on a FreeBSD 12.3 install? Hard to see what it could be, but it might be something in base that has changed.
Steve
-
@stephenw10
I am trying to disable the file system status query in qemu-ga. -
@stephenw10
With these parameters - -d -v -l /var/log/qemu-ga.log -b "guest-get-fsinfo" - qemu-ga does not generate zombies.
-b "guest-get-fsinfo" means: blacklist the guest-get-fsinfo command -
Mmm, OK. And does that still provide the info you need?
-
@stephenw10
qemu-ga provides information about interfaces, logged-in users and the FQDN. I can't see information about the guest agent version, OS, timezone, architecture, or file systems (I disabled the file system info myself). -
And that's sufficient?
-
@stephenw10
I would like to get all info from qemu-ga, like from a Linux VM.
FreeBSD qemu-ga has strange behavior: in the default config (when -m is not present) I see the method isa-serial, but the VM config uses virtio-serial. And when I set virtio-serial in the qemu-ga config, qemu-ga can't get any requests from oVirt.
qemu-ga -d -v -l /var/log/qemu-ga.log -m virtio-serial - does not work
qemu-ga -d -v -l /var/log/qemu-ga.log - works, with some restrictions on the info -
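For reference, the virtio-serial attempt can also be expressed in the agent's config file instead of on the command line. A sketch (the path and key names mirror the -D dump posted later in this thread; as described above, this transport did not answer oVirt requests here):

```ini
; Hypothetical qemu-ga.conf forcing the virtio-serial transport;
; equivalent to passing -m virtio-serial on the command line.
[general]
method=virtio-serial
path=/dev/vtcon/org.qemu.guest_agent.0
daemon=true
verbose=true
logfile=/var/log/qemu-ga.log
```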
Ok, the next thing I would test here is whether it works as expected in FreeBSD 12.3.
Is this an upstream regression or something we are doing in pfSense specifically?
Steve
-
@stephenw10
Today I will try FreeBSD 12 on oVirt.
There is a qcow image on download.freebsd.org.
I will try a clean ISO and the qcow image. -
@stephenw10
I tried qemu-ga on FreeBSD 12.3 - it works as expected. -
Hmm, so something different about our filesystem perhaps?
Permissions issue?
Same qemu-agent version?
There can't be much different there.
-
@stephenw10
On FreeBSD 12.3 - qemu-ga -V
QEMU Guest Agent 5.0.1
On pfSense - qemu-ga -V
QEMU Guest Agent 5.0.1
Command line on FreeBSD - qemu-ga -m virtio-serial -d -v -l /var/log/qemu-ga.log
Command line on pfSense - /usr/local/bin/qemu-ga -d -v -l /var/log/qemu-ga.log -b guest-get-fsinfo
-
@stephenw10
On FreeBSD I ran /usr/local/bin/qemu-ga -d -v -l /var/log/qemu-ga.log -b guest-get-fsinfo.
And everything works fine. -
@stephenw10
Everything is similar on FreeBSD and pfSense.
pfSense:
qemu_guest_agent_enable="YES"
qemu_guest_agent_flags="-d -v -l /var/log/qemu-ga.log -b "guest-get-fsinfo""
/usr/local/bin/qemu-ga -d -v -l /var/log/qemu-ga.log -b guest-get-fsinfo -D
[general]
daemon=true
method=isa-serial
path=/dev/vtcon/org.qemu.guest_agent.0
logfile=/var/log/qemu-ga.log
pidfile=/var/run/qemu-ga.pid
statedir=/var/run
verbose=true
retry-path=false
blacklist=guest-get-fsinfo
ls -l /dev/vtcon
total 0
lrwxr-xr-x 1 root wheel 10 Apr 25 15:24 com.redhat.spice.0 -> ../ttyV0.3
lrwxr-xr-x 1 root wheel 10 Apr 25 15:24 org.qemu.guest_agent.0 -> ../ttyV0.2
lrwxr-xr-x 1 root wheel 10 Apr 25 15:24 ovirt-guest-agent.0 -> ../ttyV0.1
ls -l /dev/ttyV*
crw------- 1 root wheel 0x34 Apr 25 15:24 /dev/ttyV0.1
crw------- 1 root wheel 0x35 Apr 28 14:34 /dev/ttyV0.2
crw------- 1 root wheel 0x36 Apr 25 15:24 /dev/ttyV0.3
FreeBSD:
qemu_guest_agent_enable="YES"
qemu_guest_agent_flags="-d -v -l /var/log/qemu-ga.log -b guest-get-fsinfo"
/usr/local/bin/qemu-ga -d -v -l /var/log/qemu-ga.log -b guest-get-fsinfo -D
[general]
daemon=true
method=isa-serial
path=/dev/vtcon/org.qemu.guest_agent.0
logfile=/var/log/qemu-ga.log
pidfile=/var/run/qemu-ga.pid
statedir=/var/run
verbose=true
retry-path=false
blacklist=guest-get-fsinfo
ls -l /dev/vtcon
total 0
lrwxr-xr-x 1 root wheel 10 Apr 28 14:07 com.redhat.spice.0 -> ../ttyV0.3
lrwxr-xr-x 1 root wheel 10 Apr 28 14:07 org.qemu.guest_agent.0 -> ../ttyV0.2
lrwxr-xr-x 1 root wheel 10 Apr 28 14:07 ovirt-guest-agent.0 -> ../ttyV0.1
ls -l /dev/ttyV*
crw------- 1 root wheel 0x33 Apr 28 14:07 /dev/ttyV0.1
crw------- 1 root wheel 0x34 Apr 28 14:30 /dev/ttyV0.2
crw------- 1 root wheel 0x35 Apr 28 14:07 /dev/ttyV0.3 -
Ok, let's get a bug report open to track this.
What steps are required to replicate this?
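For the report, a quick way to show the zombies accumulating while qemu-ga runs is to count processes in the Z state. A sketch; ps reports the process state in the STAT column on both FreeBSD and Linux:

```shell
#!/bin/sh
# Count processes currently in the zombie (Z) state.
# grep -c exits non-zero when the count is 0, hence the || true.
zombies=$(ps -axo stat= | grep -c '^Z' || true)
echo "zombie processes: $zombies"
```

Running this before starting qemu-ga and again after the dashboard widget has polled a few times should show the count growing only while guest-get-fsinfo is enabled.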