pfSense 2.6 problem with zombie processes
-
@netblues Please show the qemu-guest-agent config from /etc/rc.conf.local
-
root: cat /etc/rc.conf.local
qemu_guest_agent_enable="YES"
qemu_guest_agent_flags="-d -v -l /var/log/qemu-ga.log"
virtio_console_load="YES"
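A quick way to double-check that the running agent actually picked those flags up (just a sketch; the qemu_guest_agent rc script name comes with the qemu-guest-agent package):
# is the service enabled and running?
service qemu_guest_agent status
# show the full qemu-ga command line, including its flags
ps auxww | grep '[q]emu-ga'
-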
@gofaizen said in pfSense 2.6 problem with zombie processes:
qemu_guest_agent_flags="-d -v -m virtio-serial -l /var/log/qemu-ga.log -p /dev/ttyV0.2"
Where did you get this?
-
@netblues I have the same
-
@netblues This is the command line from Linux.
I have tried qemu_guest_agent_flags="-d -v -l /var/log/qemu-ga.log" and qemu_guest_agent_flags="-d -v -m virtio-serial -l /var/log/qemu-ga.log -p /dev/ttyV0.2"
Same effect - a zombie process every minute
-
@gofaizen said in pfSense 2.6 problem with zombie processes:
@netblues This is command line from linux.
I have tried qemu_guest_agent_flags="-d -v -l /var/log/qemu-ga.log" and qemu_guest_agent_flags="-d -v -m virtio-serial -l /var/log/qemu-ga.log -p /dev/ttyV0.2"
Same effect - zombie process every minute
There is no -p /dev/ttyV0.2 in my config.
-
Hmm, it's unclear which of those things is a symptom.
Is the widget throwing that error because the qemu agent is continually trying to access the disk?
Or is the agent unable to read the disk status because the widget is doing something wrong?
Or maybe both are failing because of some other issue.
If you disable the qemu agent do you still see the PHP error from the widget?
Steve
-
@stephenw10
The widget throws the error at random times.
The widget error and qemu-guest-agent are not connected in any way. When I disable qemu-guest-agent the widget keeps throwing errors at random times. It can work in normal mode for a whole day, or it can throw an error every 10 minutes. And it throws the error every time after a reboot.
qemu-guest-agent creates a zombie every time it receives a {"execute":"guest-get-fsinfo"} message from oVirt.
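If it helps to reproduce it: the same RPC can be fired at the agent from the host side, and the leftover zombies watched for on the guest. Only a sketch - the domain name is a placeholder, and an oVirt node normally drives the VM through vdsm rather than virsh directly:
# host side (assumes direct libvirt access; "pfsense-vm" is a placeholder domain name)
virsh qemu-agent-command pfsense-vm '{"execute":"guest-get-fsinfo"}'
# guest side: list any zombie processes left behind
ps -axo pid,ppid,state,comm | awk '$3 ~ /Z/'
-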
Hmm, well that sounds more like both things are failing because of something else preventing access to the filesystem.
If you run
/bin/df -hT
manually at the command line repeatedly, does it ever fail?
Can we see the output of:
/bin/df -hT --libxo=json
Though that too might need to be run until it fails. That's what the widget is trying to do and choking on the output.
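As a rough sanity check on that output (not the widget's actual code, just a sketch using the PHP CLI that ships with pfSense) you could see whether it decodes at all:
# prints "parsed" if json_decode accepts the output, "did not parse" otherwise
/bin/df -hT --libxo=json | php -r 'exit(json_decode(stream_get_contents(STDIN)) === null ? 1 : 0);' && echo parsed || echo "did not parse"
Steve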
-
@stephenw10 Yes, I can run df -hT manually. That was the first thing I checked.
": [{"name":"/dev/ufsid/60618bb1ebc69388","type":"ufs ","blocks":"5.5G","used":"2.0G","available":"3.1G","used-percent":39,"mounted-on":"/"}, {"name":"devfs","type":"devfs","blocks":"1.0K","used":"1.0K","available":"0B","used-percent":100,"mounted-on":"/dev"}, {"name":"tmpfs","type":"tmpfs","blocks":"4.0M","used":"152K","available":"3.9M","used-percent":4,"mounted-on":"/var/run"}, {"name":"devfs","type":"devfs","blocks":"1.0K","used":"1.0K","available":"0B","used-percent":100,"mounted-on":"/var/dhcpd/dev"}]} -
But if you run /bin/df -hT manually at the command line repeatedly does it ever fail? Like after 100 tries?
It will likely only fail as often as the widget does. Or if you happen to run it when the qemu agent is also trying to access the filesystem.
The JSON output there seems to be missing the initial terms. I expect it to read like:
[22.05-DEVELOPMENT][admin@plusdev-2.stevew.lan]/root: /bin/df -hT --libxo=json {"storage-system-information": {"filesystem": [{"name":"/dev/ufsid/626069f74a9f0e6e","type":"ufs ","blocks":"9.2G","used":"1.5G","available":"7.0G","used-percent":18,"mounted-on":"/"}, {"name":"devfs","type":"devfs","blocks":"1.0K","used":"1.0K","available":"0B","used-percent":100,"mounted-on":"/dev"}, {"name":"tmpfs","type":"tmpfs","blocks":"4.0M","used":"112K","available":"3.9M","used-percent":3,"mounted-on":"/var/run"}, {"name":"devfs","type":"devfs","blocks":"1.0K","used":"1.0K","available":"0B","used-percent":100,"mounted-on":"/var/dhcpd/dev"}]} }
Steve
-
@stephenw10
{"storage-system-information": {"filesystem": [{"name":"/dev/ufsid/60618bb1ebc69388","type":"ufs ","blocks":"5.5G","used":"2.0G","available":"3.1G","used-percent":39,"mounted-on":"/"}, {"name":"devfs","type":"devfs","blocks":"1.0K","used":"1.0K","available":"0B","used-percent":100,"mounted-on":"/dev"}, {"name":"tmpfs","type":"tmpfs","blocks":"4.0M","used":"152K","available":"3.9M","used-percent":4,"mounted-on":"/var/run"}, {"name":"devfs","type":"devfs","blocks":"1.0K","used":"1.0K","available":"0B","used-percent":100,"mounted-on":"/var/dhcpd/dev"}]}
}
Possibly I copied only part of the output.
I tried running /bin/df -hT --libxo=json in a loop 1000 times. Every run was successful.
#!/bin/sh
# run df with libxo JSON output 1000 times
for i in $(seq 1 1000)
do
    /bin/df -hT --libxo=json
done
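A variant that only reports problem runs might make an intermittent failure easier to spot - again just a sketch, checking the exit status and for empty output:
#!/bin/sh
for i in $(seq 1 1000)
do
    out=$(/bin/df -hT --libxo=json) || echo "run $i: df exited with an error"
    [ -z "$out" ] && echo "run $i: empty output"
done
-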
Hmm. If you remove the qemu-agent can I assume this goes away?
-
@stephenw10
When I stopped the qemu-ga service the zombie processes died.
I have checked this several times. But I need qemu-ga in my oVirt installation.
-
Hmm, well can you configure it not to query the disk status? I would assume that also solves it.
Can you test the qemu agent in a FreeBSD 12.3 install? Hard to see what it would be, but it might be something that's changed in base.
Steve
-
@stephenw10
I am trying to disable the file system status query in qemu-ga.
-
@stephenw10
With these parameters qemu-ga does not generate zombies: -d -v -l /var/log/qemu-ga.log -b "guest-get-fsinfo"
-b "guest-get-fsinfo" means: blacklist the guest-get-fsinfo command.
Mmm, OK. And does that still provide the info you need?
-
@stephenw10
qemu-ga provides information about interfaces, logged in users and the FQDN. I can't see information about the guest agent version, OS, timezone, architecture and file systems (I have disabled the file system info myself).
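For cross-checking what the agent itself still reports versus what oVirt shows, it can also be queried directly from a libvirt host - a sketch with a placeholder domain name; an oVirt node normally goes through vdsm:
# "pfsense-vm" is a placeholder; adjust to the actual domain name
virsh qemu-agent-command pfsense-vm '{"execute":"guest-info"}'
virsh qemu-agent-command pfsense-vm '{"execute":"guest-network-get-interfaces"}'
-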
And that's sufficient?