LCDProc 0.5.4-dev
-
I've now been running the 0.5.4 versions of LCDd and sdeclcd for 24 hours and am still experiencing the same LCDd CPU lock-up. So I think we can rule out the move to 0.5.5 as the problem.
Interestingly, I was able to observe it happening this afternoon. It ramps up slowly, as if it's looping around creating steadily more and more processes until it hits 100% CPU. One way to test this would be to compile the old driver against 0.5.5 and run that. I don't have a suitable compile environment set up at the moment, though.
Steve
-
Steve,
how many instances of the client were running at that time? Do you have any log that evidences the problems, or were concurrent clients running at the same time?
Thanks,
Michele
-
Hmm, I'm not sure what you mean.
There were two distinct problems I experienced.
Firstly, multiple copies of the client ended up running. This happened immediately after either rebooting or restarting the service.
Secondly, LCDd ends up using 100% CPU. This can happen with just one client running.
I don't have much by way of logging. Can we increase the logging level in lcdd.conf?
Steve
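On the logging question: LCDd's verbosity is controlled in the [server] section of LCDd.conf. A minimal sketch, using option names from the stock LCDd.conf shipped with LCDproc (check the comments in your installed copy):

```ini
[server]
# Report level: 0 = critical errors only, up to 5 = everything/debug
ReportLevel=5
# Send reports to syslog instead of stderr
ReportToSyslog=yes
```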
-
Hmm, I'm not sure what you mean.
There were two distinct problems I experienced.
Firstly where multiple copies of the client ended up running. This happened immediately after either rebooting or restarting the service.
Ok, but thanks to the last change the "second client" should have been stopped after two more attempts to connect, I guess.
Secondly where LCDd ends up using 100% cpu. Just one client can be running when this happens.
I don't have much by way of logging. Can we increase the logging level in lcdd.conf?
Ok, that's awesome. At least we know that the issue is not related to multiple instances of the client running, which in any case should not cause any problem (since LCDd is designed to support different clients at the same time). But it is surely not a "resource leak" caused by the PHP client bothering the system…
Thanks,
Michele
-
Yes I agree two distinct situations.
The first should have been solved by your recent update, thanks!
Though, as you say, LCDd can support multiple clients. Interesting that this doesn't arise with any other driver, even though it should not happen at all.
Steve
-
I just selected the HD44780 in the driver list, and on connecting the display the system log shows "LCD2USB device found".
The display is a little slow, and works best with a refresh frequency of 5 or 10 seconds.
-
Hi JPSB,
mmhhh… seems strange... how many screens do you have active? Did you have the same problem with the previous version, too?
Thanks,
Michele
-
Hi Michele
Yes I had the same problem with last version.
I only have "Interface Traffic" and "Load" running.
But the new version, with options to change contrast, etc., has the display running perfectly.
I have not had any luck getting the new driver "hd44780 Fast" to run.
But as I said, it runs perfectly with the other changes.
So once again, many thanks for a fantastic job.
See my youtube video of the display:
http://www.youtube.com/watch?v=moL-x1HpPew&feature=autoplay&list=ULmoL-x1HpPew&lf=mfu_in_order&playnext=46
-
Hi, I'm having a problem with the display running on the alix2d13 hardware.
After about 12 hours, the CPU load runs at 100% and pfSense freezes.
I have tried with multiple modules loaded but it makes no difference.
pfSense 2.0.1
alix2d13
4Gb CF-card
MiniPCI vpn1411 encryption accelerator
U204FB-A1 20x4 Display
-
What if we roll back to 0.5.4 (the package now works with 0.5.5)?
-
I've been testing using the 0.5.4 versions of LCDd and sdeclcd.so and there is no difference. As I type this I'm unable to access my box.
The fact that jpsb is experiencing a similar problem with a different driver is alarming.
I haven't tried going back to pfSense 2.0 yet. :-\
Still testing…
Steve
-
Steve,
can you try to use the LCDproc package (I mean not the "-dev" package) and see if the problem occurs?
I guess with the "LCDproc" package you either didn't have any problem, or you were not running it?
Thanks,
Michele
-
I don't know if it can help, but some time ago I had endless startup processes from mailscanner that exhausted machine resources on every boot.
I noticed that at bootup, the mailscanner startup was called several times.
Maybe there is something related with multiple lcdproc scripts/processes being opened.
-
I don't know if it can help, but some time ago I had endless startup processes from mailscanner that exhausted machine resources on every boot.
I noticed that at bootup, the mailscanner startup was called several times.
Maybe there is something related with multiple lcdproc scripts/processes being opened.
Thanks Marcello, but from what I understood it is the LCDd process alone that, after a certain amount of time, eats all the resources…
-
I never ran the original LCDproc package (except while trying to develop my own package, and then only for a few hours) because it never included the sdeclcd driver.
Before you added the driver to the LCDproc-dev package all firebox users were running a manual installation that consisted of:
LCDd 0.5.3
lcdproc client with manual command line options for screens.
The old sdeclcd driver.
A simple startup script that ran the server and client once from /usr/local/etc/rc.d
I never saw this crash out on any box. It was distributed as a tarball with an install script. Here.
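For reference, that one-shot rc.d script could be sketched roughly as below. The binary paths and the client's screen arguments are illustrative assumptions, not the original script:

```shell
#!/bin/sh
# Hypothetical reconstruction of the one-shot startup script described
# above; paths and screen flags are assumptions, not the original.
LCDD=/usr/local/sbin/LCDd
CLIENT=/usr/local/bin/lcdproc

if [ -x "$LCDD" ] && [ -x "$CLIENT" ]; then
    "$LCDD" -d sdeclcd &   # start the server once with the sdeclcd driver
    sleep 2                # give the server time to open its socket
    "$CLIENT" C M U &      # client with an assumed set of screens
    STARTED=yes
else
    STARTED=no             # binaries not present on this system
fi
echo "started: $STARTED"
```

Being run once from rc.d (rather than supervised) matches the post: nothing restarts the pair, so a second copy can only appear if the script itself is invoked twice.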
I never tested this with pfSense 2.0.1.
Steve
-
Hi I having problem with the display running on the alix2d13 hardware.
U204FB-A1 20x4 Display
What is the driver for this LCD?
-
Before you added the driver to the LCDproc-dev package all firebox users were running a manual installation that consisted of:
LCDd 0.5.3
lcdproc client with manual command line options for screens.
The old sdeclcd driver.
A simple startup script that ran the server and client once from /usr/local/etc/rc.d
Can I install the old driver and run the LCDproc-dev scripts against it? Would it work this way? Or maybe replace LCDd instead?
I'm thinking the problem is with either the driver or LCDd, not the client scripts. I've never seen an issue with the client, but every time the display has stopped working for me, the LCDd process was at 100%.
-
Just what I'm going to try after work.
You will need both LCDd and sdeclcd.so from the tarball. I've never tried running it since the 2.0.1 update, but I see no reason why anything should have changed.
Steve
Edit: No compatibility issues, testing now.
-
I've been testing using 0.54 versions of LCDd and sdeclcd.so and there is no difference.
New test driver:
https://github.com/downloads/fmertz/sdeclcd/sdeclcd.so
I removed the call to the process scheduler. Give it a try…
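For context on why a scheduler call matters: if a driver requests a real-time scheduling class (e.g. via sched_setscheduler()) and then busy-waits, it can pin a CPU at 100%. Whether sdeclcd did exactly this is an assumption on my part, but on a Linux box you can at least inspect a process's scheduling class:

```shell
# Show the scheduling class of a process (TS = normal time-sharing,
# FF/RR = real-time FIFO/round-robin, which can starve other work).
# $$ (this shell) is just a stand-in; substitute LCDd's PID.
ps -o pid,cls,comm -p $$
```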
-
Steve,
I looked at the LCDd.conf in the tarball and I found no difference that could cause this… BUT at the same time I probably had the same issue on my secondary machine.
The machine is running the screens: Uptime, Load, States, Mbuf and Interface Traffic (WAN).
Can you please select only the Interface Traffic (WAN) screen and tell me if it hangs again? So we can exclude one screen...
Thanks,
Michele
-
I removed the call to the process scheduler. Give it a try…
Ah, that sounds interesting.
Can you please select only the Interface traffic (WAN) and tell me if it hangs again? So we exclude one screen...
You want me to run only the Interface Traffic screen? Currently I'm running Uptime and Time.
Too many tests, not enough boxes! :P
Steve
-
Hehe! Sorry buddy, if some WatchGuard representative sends me a couple of Fireboxes I can test them too! :D
-
New test driver:
https://github.com/downloads/fmertz/sdeclcd/sdeclcd.so
I removed the call to the process scheduler. Give it a try…
Downloaded this driver and am going to try it. I left everything else unchanged from the .9 dev package and will see if the driver alone makes any difference in the morning. If the driver doesn't help, I will restore the original .9 driver and change to LCDd 0.5.3 in stephenw10's manual package. A methodical approach seems the best way to narrow this down.
I haven't had any resource issues other than LCDd locking the CPU to 100% until I kill it. Even on my box with only 256M, I still have over 128M free and no swap in use. In fact, today it ran for 8 hours at 100% while I was at work and continued to route and firewall properly.
load averages: 10.06, 9.81, 9.30
101 processes: 13 running, 76 sleeping, 12 waiting
CPU: 20.4% user, 0.0% nice, 78.6% system, 1.0% interrupt, 0.0% idle
Mem: 62M Active, 12M Inact, 35M Wired, 25M Buf, 125M Free
Swap: 512M Total, 512M Free

  PID USERNAME PRI NICE  SIZE   RES STATE   TIME    WCPU COMMAND
12019 nobody    74  r30 3368K 1496K RUN   528:54 100.00% LCDd
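A snapshot like the one above can also be checked for automatically. A small sketch of such a check (the 90% threshold is an arbitrary choice; `ps axo` syntax works on both FreeBSD and Linux):

```shell
# Flag any process above a CPU threshold, e.g. a runaway LCDd.
# Skip the header row, then compare the %CPU column numerically.
HOGS=$(ps axo pid,pcpu,comm | awk 'NR > 1 && $2 + 0 > 90 {print $1, $3}')
if [ -n "$HOGS" ]; then
    echo "CPU hogs:"
    echo "$HOGS"        # candidates for a kill <pid>
else
    echo "no process above 90% CPU"
fi
```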
Lastly, my math was off in a previous post: my failures all seem to start at around 9 hours (+/- 1 hour) of uptime (not 16 as previously reported).
I will report status in the morning and with any luck the new driver resolves this.