Viewing dashboard makes UI very sluggish (php)
-
It's hard to say what might be causing that to be so resource-intensive.
I am watching the dashboard on a 1GHz VIA right now that is synced up to about Tuesday afternoon's snapshot, and it only uses 6-9% CPU. I've even got the traffic graphs, system info, and a bunch of other widgets on.
It might be something specific to that hardware.
-
I'm also using the Atom D510, and on the 32-bit version I get only about 2% CPU usage from the php process (full dashboard, no traffic; WAN disconnected).
I'll try to test 64-bit later.
-
I have a 64-bit VM and it's sitting at about 0-2%, though it's running on an i5-750 ;D
-
Mine looks OK
2.0-BETA4 (i386) built on Tue Aug 17 04:34:37 EDT 2010, FreeBSD 8.1-RELEASE, on an ALIX board
last pid: 62263; load averages: 0.11, 0.07, 0.02 up 0+03:27:01 17:15:02
44 processes: 1 running, 43 sleeping
CPU: 0.5% user, 0.0% nice, 2.5% system, 0.0% interrupt, 97.0% idle
Mem: 21M Active, 14M Inact, 28M Wired, 164K Cache, 34M Buf, 171M Free
Swap:
PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
29510 root 1 76 0 33568K 17584K accept 0:04 0.98% php
35007 root 1 76 20 3656K 1364K wait 0:05 0.00% sh
21118 root 1 44 0 3316K 1296K select 0:04 0.00% apinger
24379 root 1 44 0 6556K 4600K kqread 0:02 0.00% lighttpd
27440 nobody 1 44 0 4528K 2440K select 0:01 0.00% dnsmasq
38711 dhcpd 1 44 0 3316K 2188K select 0:01 0.00% dhcpd
50466 root 13 64 20 4940K 1384K nanslp 0:01 0.00% check_reload_status
34159 root 1 44 0 7992K 3520K select 0:00 0.00% sshd
24443 root 1 76 0 31520K 9168K wait 0:00 0.00% php
17831 root 1 44 0 5912K 1956K bpf 0:00 0.00% tcpdump
17190 root 1 44 0 3448K 1392K select 0:00 0.00% syslogd
16074 _dhcp 1 44 0 3316K 1392K select 0:00 0.00% dhclient
7378 _ntp 1 44 0 3316K 1328K select 0:00 0.00% ntpd
42883 root 1 47 0 3404K 1388K nanslp 0:00 0.00% cron
2185 root 1 76 0 3684K 1572K wait 0:00 0.00% login
13532 root 1 44 0 4696K 2320K pause 0:00 0.00% tcsh
21441 root 1 44 0 4480K 1640K piperd 0:00 0.00% rrdtool
56463 root 1 44 0 3656K 1444K wait 0:00 0.00% sh
50200 root 1 73 0 3316K 1020K nanslp 0:00 0.00% minicron
7390 root 1 44 0 3316K 1352K select 0:00 0.00% ntpd
7187 root 1 76 0 3316K 1288K select 0:00 0.00% dhclient
2242 root 1 44 0 3316K 960K piperd 0:00 0.00% sshlockout_pf
3361 root 1 76 0 3656K 1404K ttyin 0:00 0.00% sh
2257 root 1 76 0 3656K 1404K wait 0:00 0.00% sh
60576 root 1 44 0 3712K 1912K RUN 0:00 0.00% top
18610 root 1 44 0 3436K 1388K select 0:00 0.00% inetd
18083 root 1 76 0 3316K 820K piperd 0:00 0.00% logger
154 root 1 44 0 1888K 528K select 0:00 0.00% devd
11448 root 1 44 0 5272K 3020K select 0:00 0.00% sshd
40599 root 1 76 20 1564K 576K nanslp 0:00 0.00% sleep
50575 root 1 57 0 3316K 1020K nanslp 0:00 0.00% minicron
51092 root 1 76 0 3316K 1020K nanslp 0:00 0.00% minicron
-
Could hyperthreading cause this? I could try disabling it.
-
I doubt it's helping anything. :-)
-
ok. I left it on after reading this post, but if it's not doing anything for me then I'll shut it off tonight.
http://forum.pfsense.org/index.php/topic,26903.msg140173.html#msg140173
-
ok, this is definitely something in the later snapshots, and it's affecting the nanobsd platform too. And it's not my config.
I backed up my config from the full install on Atom hardware and uploaded it to a net5501 running nanobsd 2g. When I boot the slice using the August 2 snapshot and load the dashboard, top shows php at 1.95% CPU. When I boot the Aug 20 slice, php again hogs all the idle CPU.
-
I updated to a snap from this morning on amd64 and it's still sitting practically idle.
How can you say it's not in your config when you restored your config to the other router? Sure, the slices were different, but your config is still the common element.
Try it with a fresh config, not a restore. See if the same thing happens.
-
Both slices use the same config. Same hardware, same config. One slice shows 100% CPU usage while viewing the dashboard; the older slice shows 1.95%. The only changed variable is the snapshot.
-
Try removing all widgets from dashboard and adding them back one by one. Maybe that will help to pinpoint which one of them is causing the issue or if it's the dashboard itself.
-
I removed every widget from the dashboard and it was still sluggish. I don't recall if I then tried saving, navigating away, and then back. Will do so tonight or Monday morning when I load up the other slice.
-
That still doesn't rule out something in your config being part of the issue. Try it again with both slices and a stock configuration first, or at least a bare minimum configuration to operate.
-
At best it's an interaction between the software and my config. The same config doesn't cause the older snaps to malfunction.
I will try a stock config and let you know what I find.
-
Or, quite possibly, something was enabled but broken in an older snapshot but works now :-)
Either way, it's a valid and necessary test to help narrow down the problem.
-
For kicks and grins you can also edit /usr/local/www/javascript/index/ajax.js, go to line 32, and comment out (put // before) all of the update<whatever>() functions.
I fixed a bug in this commit:
https://rcs.pfsense.org/projects/pfsense/repos/mainline/commits/15c5b5d63710f28284a974902d0771ceefbb5e86
That made most of the AJAX updates completely fail to run unless you had the CPU widget loaded.
I should probably wrap a bunch of that code in widgetActive() calls so they only run if you have those particular widgets on. As it is, it tries to update them all whether or not you have the widgets loaded.
An easy way to see if that was the problem in your case would be to go to the Aug 2 snap and add the CPU widget. If your CPU usage spikes, then it's one of those AJAX update calls doing it.
It would still be something in your configuration causing it though, as it doesn't happen either way for me, but it may still help narrow it down.
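A widgetActive() guard like the one described above might look something like the sketch below. This is only an illustration of the idea, not the real pfSense code: loadedWidgets, the widget names, and the way active widgets get registered are all assumptions here.

```javascript
// Sketch only: track which widgets the dashboard actually rendered,
// and skip the AJAX-driven updates for everything else.
var loadedWidgets = { cpu: true };   // would be filled in as widgets render

function widgetActive(name) {
    return loadedWidgets[name] === true;
}

function stats(x) {
    var values = x.split("|");       // same pipe-separated format as getstats.php
    var updated = [];
    if (widgetActive("cpu")) {
        // updateCPU(values[0]); -- real call would go here
        updated.push("cpu");
    }
    if (widgetActive("memory")) {
        // updateMemory(values[1]);
        updated.push("memory");
    }
    return updated;                  // returned here just so the sketch is checkable
}
```

With a guard like this, a dashboard with no widgets loaded would do no DOM work at all on each poll, instead of attempting all nine updates every interval.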
-
So like this?
/*function updateMeters() {
 * url = '/getstats.php'
 *
 * new Ajax.Request(url, {
 *   method: 'get',
 *   onSuccess: function(transport) {
 *     response = transport.responseText || "";
 *     if (response != "")
 *       stats(transport.responseText);
 *   }
 * });
 * setTimeout('updateMeters()', update_interval);
 */}
-
Nah, like this:
function stats(x) {
	var values = x.split("|");
	if (values.find(function(value){
		if (value == 'undefined' || value == null)
			return true;
		else
			return false;
	}))
		return;

//	updateCPU(values[0]);
//	updateMemory(values[1]);
//	updateUptime(values[2]);
//	updateState(values[3]);
//	updateTemp(values[4]);
//	updateDateTime(values[5]);
//	updateInterfaceStats(values[6]);
//	updateInterfaces(values[7]);
//	updateGatewayStats(values[8]);
}
-
ok, I tried that and it made no difference either before or after a reboot.
-
How about if you comment out line 19 of that same file?
// setTimeout('updateMeters()', update_interval);
That should stop all AJAX updates, not just the javascript portion.
Or you could experiment with /usr/local/www/includes/functions.inc.php in the get_stats() function to see which one of the function calls there might be causing a delay. To test in there, you need to make sure that a value is still in the variable, even if it's blank, like so:
// $stats['mem'] = mem_usage();
$stats['mem'] = 0;
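Instead of commenting calls out one at a time, another way to narrow down which update is slow is to wrap each one in a timing helper and watch the browser console. This is a generic sketch, not pfSense code; the timed() name and the usage line are made up for illustration (the PHP side could do the same thing with microtime() around each call in get_stats()).

```javascript
// Sketch: wrap any function so each call logs how long it took.
// Useful for finding which update<Name>() call eats the time.
function timed(name, fn) {
    return function () {
        var t0 = Date.now();
        var result = fn.apply(this, arguments);
        console.log(name + " took " + (Date.now() - t0) + " ms");
        return result;
    };
}

// Hypothetical usage in ajax.js:
//   updateInterfaces = timed("updateInterfaces", updateInterfaces);
```

The wrapper passes arguments and the return value straight through, so dropping it around a suspect function shouldn't change behavior, only add a log line per call.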