CPU loaded at 100% and pfSense hangs
-
stephenw10, device polling is off for me. If I turn it on, the CPU is still loaded at 100%.
-
Do you still have so many processes running after the reboot?
Also if you could edit your post above to include the close code tag it would make this thread much easier to read. ;)
Steve
-
There are fewer processes now, after I removed the unnecessary scripts №1.2 and №1.3 from NCAT.
But when a file is downloading the CPU is still at 100%, and ng_queue is to blame:
USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
root 13 70.8 0.0 0 8 ?? DL 1:31PM 14:29.18 ng_queue
root 11 15.0 0.0 0 8 ?? RL 1:31PM 236:59.29 idle
root 12 0.9 0.0 0 128 ?? WL 1:31PM 17:15.84 intr
root 60142 0.9 1.5 53596 15552 ?? S 6:15PM 0:18.32 /usr/local/bin/php
root 1490 0.7 1.5 53596 15552 ?? S 6:13PM 0:18.99 /usr/local/bin/php
root 0 0.0 0.0 0 64 ?? DLs 1:31PM 0:10.28 kernel
root 1 0.0 0.0 1888 460 ?? ILs 1:31PM 0:00.01 /sbin/init --
root 2 0.0 0.0 0 8 ?? DL 1:31PM 0:01.18 g_event
root 3 0.0 0.0 0 8 ?? DL 1:31PM 0:01.55 g_up
root 4 0.0 0.0 0 8 ?? DL 1:31PM 0:01.06 g_down
root 5 0.0 0.0 0 8 ?? DL 1:31PM 0:00.00 crypto
root 6 0.0 0.0 0 8 ?? DL 1:31PM 0:00.00 crypto returns
root 7 0.0 0.0 0 8 ?? DL 1:31PM 0:00.15 fdc0
root 8 0.0 0.0 0 8 ?? DL 1:31PM 0:00.00 sctp_iterator
root 9 0.0 0.0 0 8 ?? DL 1:31PM 0:03.69 pfpurge
root 10 0.0 0.0 0 8 ?? DL 1:31PM 0:00.00 audit
root 14 0.0 0.0 0 8 ?? DL 1:31PM 0:33.35 yarrow
root 15 0.0 0.0 0 8 ?? DL 1:31PM 0:00.00 xpt_thrd
root 16 0.0 0.0 0 8 ?? DL 1:31PM 0:00.02 pagedaemon
root 17 0.0 0.0 0 8 ?? DL 1:31PM 0:00.00 vmdaemon
root 18 0.0 0.0 0 8 ?? DL 1:31PM 0:00.00 pagezero
root 19 0.0 0.0 0 8 ?? DL 1:31PM 0:00.03 idlepoll
root 20 0.0 0.0 0 8 ?? DL 1:31PM 0:00.13 bufdaemon
root 21 0.0 0.0 0 8 ?? DL 1:31PM 0:00.10 vnlru
root 22 0.0 0.0 0 8 ?? DL 1:31PM 0:01.35 syncer
root 23 0.0 0.0 0 8 ?? DL 1:31PM 0:00.11 softdepflush
root 39 0.0 0.0 0 8 ?? DL 1:31PM 0:00.11 md0
root 254 0.0 0.1 3408 1156 ?? INs 1:31PM 0:00.03 /usr/local/sbin/check_reload_status
root 256 0.0 0.1 3408 1032 ?? IN 1:31PM 0:00.00 check_reload_status: Monitoring daemon of check_reload_status
root 267 0.0 0.1 1888 540 ?? Is 1:31PM 0:00.00 /sbin/devd
root 1712 0.0 0.2 4948 2544 ?? Ss 4:39PM 0:00.85 /usr/sbin/syslogd -c -c -l /var/dhcpd/var/run/log -f /var/etc/syslog.conf
root 2712 0.0 1.5 53596 15432 ?? S 5:54PM 0:13.80 /usr/local/bin/php
root 4825 0.0 0.1 3316 1356 ?? SNs 1:33PM 0:11.50 /usr/local/sbin/apinger -c /var/etc/apinger.conf
root 9353 0.0 0.1 3316 1240 ?? Is 1:31PM 0:00.00 dhclient: rl0 priv (dhclient)
root 9853 0.0 1.5 53596 15436 ?? S 5:53PM 0:14.68 /usr/local/bin/php
root 12729 0.0 0.1 3404 1348 ?? I 1:33PM 0:00.00 cron: running job (cron)
root 12921 0.0 0.1 3656 1340 ?? Is 1:33PM 0:00.01 /bin/sh /usr/scripts/start-keeper.sh
root 14221 0.0 0.1 3656 1440 ?? SN 1:33PM 0:04.79 sh /usr/scripts/mpd-keeper
_dhcp 15600 0.0 0.1 3316 1376 ?? Is 1:31PM 0:00.00 dhclient: rl0 (dhclient)
root 16742 0.0 0.3 7992 3520 ?? RNs 4:50PM 0:00.71 sshd: admin@pts/0 (sshd)
nobody 18036 0.0 0.3 5556 2636 ?? S 4:50PM 0:05.32 /usr/local/sbin/dnsmasq --local-ttl 1 --all-servers --dns-forward-max=5000 --cache-size=10000
root 18250 0.0 0.4 9488 4300 ?? SNs 1:33PM 0:00.49 /usr/local/sbin/mpd5 -b -k -d /var/etc -f mpd_opt1.conf -p /var/run/l2tp_opt1.pid -s ppp l2tpclient
root 22166 0.0 0.3 5176 2628 ?? Ss 1:31PM 0:00.48 /usr/sbin/hostapd -B /var/etc/hostapd_ath0_wlan0.conf
dhcpd 24292 0.0 0.6 8436 6152 ?? Ss 4:50PM 0:01.01 /usr/local/sbin/dhcpd -user dhcpd -group _dhcp -chroot /var/dhcpd -cf /etc/dhcpd.conf ste0 ath0_wlan0
root 24977 0.0 0.1 3532 1208 ?? Is 4:50PM 0:00.02 /usr/local/sbin/sshlockout_pf 15
root 32707 0.0 0.2 4496 1932 ?? SN 4:51PM 0:00.10 /usr/local/bin/rrdtool -
root 37039 0.0 0.1 3436 1540 ?? Is 1:31PM 0:01.23 /usr/sbin/inetd -wW -R 0 -a 127.0.0.1 /var/etc/inetd.conf
root 43281 0.0 0.1 3404 1360 ?? Ss 1:32PM 0:00.18 /usr/sbin/cron -s
root 43753 0.0 0.1 3316 992 ?? Is 1:32PM 0:00.03 /usr/local/bin/minicron 240 /var/run/ping_hosts.pid /usr/local/bin/ping_hosts.sh
root 43904 0.0 0.1 3316 992 ?? Is 1:32PM 0:00.00 /usr/local/bin/minicron 3600 /var/run/expire_accounts.pid /etc/rc.expireaccounts
root 43930 0.0 0.3 5272 3216 ?? INs 4:50PM 0:00.00 /usr/sbin/sshd
root 44525 0.0 0.1 3316 960 ?? Is 1:32PM 0:00.00 /usr/local/bin/minicron 86400 /var/run/update_alias_url_data.pid /etc/rc.update_alias_url_data
root 45349 0.0 0.3 6588 3440 ?? SN 4:50PM 0:05.36 /usr/local/sbin/lighttpd -f /var/etc/lighty-webConfigurator.conf
nobody 45618 0.0 0.1 3344 992 ?? Is 2:01PM 0:00.05 nc -w 2000 192.168.1.43 6667
root 47162 0.0 0.1 3532 1196 ?? Is 1:32PM 0:00.02 /usr/local/sbin/sshlockout_pf 15
root 47882 0.0 1.0 52572 10544 ?? Is 1:31PM 0:00.14 /usr/local/bin/php
root 48714 0.0 1.0 52572 10544 ?? Is 1:31PM 0:00.14 /usr/local/bin/php
root 51899 0.0 0.2 3656 1572 ?? IN 4:50PM 0:03.40 /bin/sh /var/db/rrd/updaterrd.sh
root 53185 0.0 0.1 1564 592 ?? IN 6:46PM 0:00.00 sleep 60
root 53472 0.0 0.1 3316 1340 ?? Is 1:32PM 0:00.00 ntpd: priv (ntpd)
root 53572 0.0 0.1 1564 592 ?? SN 6:47PM 0:00.00 sleep 5
_ntp 6388 0.0 0.1 3316 1344 v0- I 1:31PM 0:00.20 ntpd: ntp engine (ntpd)
root 27567 0.0 0.3 5912 2612 v0- S 1:31PM 0:01.01 /usr/sbin/tcpdump -s 256 -v -l -n -e -ttt -i pflog0
root 27609 0.0 0.1 3316 928 v0- S 1:31PM 0:01.12 logger -t pf -p local0.info
root 46972 0.0 0.1 3684 1500 v0 Is 1:32PM 0:00.03 login pam (login)
root 47163 0.0 0.1 3656 1392 v0 I 1:32PM 0:00.01 -sh (sh)
root 48549 0.0 0.1 3656 1392 v0 I+ 1:32PM 0:00.01 /bin/sh /etc/rc.initial
root 18234 0.0 0.2 3656 1556 0 Is 4:50PM 0:00.01 /bin/sh /etc/rc.initial
root 24340 0.0 0.2 3672 2364 0 S 4:50PM 0:00.08 /bin/tcsh
-
Now that you got rid of whatever modified source was there, I'm wondering if the remaining issues are just a fact that you're running possibly the worst NICs ever created, and an old Celeron CPU (the lack of cache hits network throughput performance in a firewall scenario hard, huge diff between a Celeron and P4 of the same clock speed for firewall purposes). 80 Mbps through crap NICs and an old Celeron proc may just be tops of what your hardware can accomplish. Using a P4 proc of the same clock speed would be drastically faster for firewall purposes. Better NICs would reduce CPU usage, but not sure if by enough to make much diff.
-
cmb, I replaced the network card with a TP-LINK TG-3269 and the processor is still loaded at 100%. Perhaps you are right and I have to change the CPU.
Or is there another way?
-
Is this really a P4 era Celeron?
http://en.wikipedia.org/wiki/List_of_Intel_Celeron_microprocessors#.22Willamette-128.22_.28180_nm.29
I am running a P4-M at 1.2GHz. It can pass >300Mbps. Yes it has 512KB cache vs 128KB in the Celeron but I find it hard to believe you couldn't pass 80Mbps.
Interesting information about cache being so important though.
Do you have hundreds of firewall rules? What is using the CPU time in 'top -SH'?
Replacing the Celeron with a P4 should be easy though, they are very cheap. I have several here you could have for free if you were near enough. ;)
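If capturing full 'top -SH' output over SSH is awkward, a plain ps pipeline gives a similar quick answer. This is only a generic sketch, not a pfSense-specific tool; the flags used here avoid OS-specific extensions so the same line works on both FreeBSD and Linux:

```shell
# Show the five processes using the most CPU right now.
# Column 2 (pcpu) is sorted numerically in reverse, so the
# biggest consumers come first; the header sorts to the bottom.
ps axo pid,pcpu,comm | sort -k2 -rn | head -5
```

On the box in question this should put ng_queue and the php processes near the top, matching the listing already posted above.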
Steve
-
There have been a couple instances of people on here running really old Celerons that got horrid performance, just slapping a really old P4 with the same clock speed into the same box quadrupled throughput in one case. Way more than I would have expected, the cache makes a massive difference.
-
I replaced the processor with an Intel(R) Pentium(R) 4 CPU 3.00GHz, and the speed is now fine at the full 80-90 Mbps :)
But the CPU load is still high, 35-85% when downloading a file. What could be wrong?
last pid: 31994; load averages: 0.86, 0.64, 0.37 up 0+00:10:32 10:02:44
94 processes: 5 running, 74 sleeping, 15 waiting
CPU: 1.7% user, 0.0% nice, 38.8% system, 21.3% interrupt, 38.2% idle
Mem: 45M Active, 15M Inact, 40M Wired, 108K Cache, 23M Buf, 884M Free
Swap: 2048M Total, 2048M Free
PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU COMMAND
11 root 171 ki31 0K 16K RUN 0 7:41 45.75% {idle: cpu0}
11 root 171 ki31 0K 16K RUN 1 5:40 40.58% {idle: cpu1}
0 root -68 0 0K 64K - 1 3:40 36.96% {ath0 taskq}
12 root -28 - 0K 120K WAIT 0 0:32 34.86% {swi5: +}
13 root 55 - 0K 16K sleep 1 0:16 15.48% {ng_queue0}
13 root 55 - 0K 16K RUN 0 0:16 15.38% {ng_queue1}
55418 root 47 0 54620K 20792K piperd 0 0:06 0.68% php
12 root -32 - 0K 120K WAIT 0 0:11 0.39% {swi4: clock}
55510 root 76 0 53596K 20132K accept 0 0:06 0.29% php
0 root 76 0 0K 64K sched 1 1:02 0.00% {swapper}
53409 root 47 0 54620K 20524K accept 1 0:03 0.00% php
53607 root 47 0 53596K 16348K accept 1 0:02 0.00% php
14 root -16 - 0K 8K - 1 0:01 0.00% yarrow
0 root -68 0 0K 64K - 1 0:01 0.00% {ath0 taskq}
33903 root 44 0 4948K 2516K select 0 0:00 0.00% syslogd
20429 root 64 20 3316K 1356K select 1 0:00 0.00% apinger
62838 root 64 20 5564K 3256K kqread 0 0:00 0.00% lighttpd
4 root -8 - 0K 8K - 1 0:00 0.00% g_down
2937 root 76 20 3656K 1440K wait 0 0:00 0.00% sh
49585 root 44 0 3712K 2012K CPU0 0 0:00 0.00% top
3 root -8 - 0K 8K - 1 0:00 0.00% g_up
12 root -68 - 0K 120K WAIT 0 0:00 0.00% {irq18: ath0}
-
Help me, please!
-
I don't think you have a problem, other than still having poor quality NIC hardware that induces significantly more CPU load than good quality NICs would. It's working fine as is; the fact it's using 30% CPU is irrelevant, since you're maxing out your Internet connection without coming close to maxing out your hardware.
-
I have never used L2TP from a pfSense box. I have no idea how much overhead that might represent or which process might show that in top. I have to assume it uses some cpu cycles though which might explain why your box looks more heavily loaded than I would have expected. Chris?
In the top output I assume you are maxing out your 80Mb WAN connection? And using the wifi interface (ath0)?
Steve
-
cmb, a good network card is expensive, starting from $100. Can you suggest something budget-friendly, please, that is still effective?
-
stephenw10, at peak, WAN traffic can reach 90-100 Mbit/s.
And yes, I use the WiFi network for mobile devices and netbooks.
-
But were you using the wifi interface at the same time as maxing out your WAN and what bandwidth was ath0 seeing when you ran 'top' above?
80Mbps over wifi is going to use more cpu cycles than 80Mbps via ethernet, if only because of the encryption. I'm just trying to determine exactly what the conditions were so that I might run a comparable test. Do you know exactly what CPU you used?
Steve
-
…plus you'd never see more than 45Mbps over wifi anyhow.
-
For the sake of getting some comparable figures up, here is the output of top -SH from my home box which is, as I previously mentioned, a 1.8GHz P4-M underclocked to 1.2GHz. It's quite low end. ;)
Here there is nothing much happening, no throughput to speak of.
last pid: 57933; load averages: 0.63, 0.79, 0.44 up 155+21:01:22 12:28:57
109 processes: 4 running, 90 sleeping, 15 waiting
CPU: 0.4% user, 0.0% nice, 0.0% system, 0.4% interrupt, 99.3% idle
Mem: 72M Active, 51M Inact, 62M Wired, 204K Cache, 59M Buf, 300M Free
Swap:
PID USERNAME PRI NICE SIZE RES STATE TIME WCPU COMMAND
10 root 171 ki31 0K 8K RUN 3610.3 94.97% idle
11 root -32 - 0K 128K WAIT 462:52 0.00% {swi4: clock}
11 root -68 - 0K 128K RUN 237:24 0.00% {irq18: em0 ath0+}
11 root -68 - 0K 128K WAIT 90:45 0.00% {irq17: fxp2 fxp6}
0 root -68 0 0K 88K - 31:11 0.00% {em1 taskq}
0 root -68 0 0K 88K - 31:01 0.00% {ath0 taskq}
13 root -16 - 0K 8K - 30:51 0.00% yarrow
0 root -68 0 0K 88K - 28:33 0.00% {em0 taskq}
11 root -44 - 0K 128K WAIT 27:13 0.00% {swi1: netisr 0}
47006 root 76 20 3656K 1600K wait 21:20 0.00% sh
0 root -68 0 0K 88K - 20:23 0.00% {em2 taskq}
6810 nobody 74 r30 3368K 1484K RUN 14:37 0.00% LCDd
35 root -8 - 0K 8K mdwait 13:50 0.00% md1
20 root 44 - 0K 8K syncer 11:39 0.00% syncer
2 root -8 - 0K 8K - 11:16 0.00% g_event
5227 root 65 20 46428K 17508K nanslp 11:08 0.00% php
17355 root 44 0 7612K 5184K kqread 10:50 0.00% lighttpd
12 root -16 - 0K 8K sleep 10:35 0.00% ng_queue
11 root -68 - 0K 128K WAIT 9:07 0.00% {irq19: fxp0 fxp4}
4 root -8 - 0K 8K - 8:26 0.00% g_down
29 root -8 - 0K 8K mdwait 7:15 0.00% md0
27610 root 44 0 8464K 4440K select 7:00 0.00% {mpd5}
47631 root 44 0 8984K 6272K bpf 4:10 0.00% tcpdump
44283 dhcpd 44 0 8436K 6552K select 4:02 0.00% dhcpd
3 root -8 - 0K 8K - 3:43 0.00% g_up
33556 root 44 0 3352K 1308K select 3:42 0.00% miniupnpd
9595 root 64 20 3316K 1336K select 3:24 0.00% apinger
1433 root 44 0 4948K 2456K select 2:48 0.00% syslogd
7 root -16 - 0K 8K pftm 2:38 0.00% pfpurge
0 root -16 0 0K 88K sched 2:36 0.00% {swapper}
11 root -64 - 0K 128K WAIT 2:27 0.00% {irq14: ata0}
32508 nobody 44 0 5556K 2824K select 1:28 0.00% dnsmasq
21 root -16 - 0K 8K sdflus 1:25 0.00% softdepflush
14 root -64 - 0K 96K - 1:11 0.00% {usbus2}
47955 root 44 0 3316K 892K piperd 1:10 0.00% logger
18 root -16 - 0K 8K psleep 1:05 0.00% bufdaemon
Here is the same situation, no throughput, but I have the webGUI dashboard open on another machine. You'll see it's quite a resource hog on a low end machine like this, and it doesn't appear as a process, only in the CPU: system figure.
last pid: 52329; load averages: 1.78, 0.74, 0.31 up 155+20:57:37 12:25:12
111 processes: 4 running, 91 sleeping, 16 waiting
CPU: 19.0% user, 0.4% nice, 38.4% system, 0.4% interrupt, 41.8% idle
Mem: 73M Active, 51M Inact, 62M Wired, 204K Cache, 59M Buf, 299M Free
Swap:
PID USERNAME PRI NICE SIZE RES STATE TIME WCPU COMMAND
10 root 171 ki31 0K 8K RUN 3610.3 32.96% idle
36179 root 76 0 43356K 18008K lockf 0:10 3.96% php
20371 root 76 0 43356K 17896K lockf 0:09 2.98% php
14491 root 76 0 43356K 16680K piperd 0:28 1.95% php
10354 root 76 0 43356K 15464K piperd 0:27 1.95% php
11 root -32 - 0K 128K WAIT 462:52 0.00% {swi4: clock}
11 root -68 - 0K 128K WAIT 237:24 0.00% {irq18: em0 ath0+}
11 root -68 - 0K 128K WAIT 90:45 0.00% {irq17: fxp2 fxp6}
0 root -68 0 0K 88K - 31:11 0.00% {em1 taskq}
0 root -68 0 0K 88K - 31:01 0.00% {ath0 taskq}
13 root -16 - 0K 8K - 30:51 0.00% yarrow
0 root -68 0 0K 88K - 28:33 0.00% {em0 taskq}
11 root -44 - 0K 128K WAIT 27:13 0.00% {swi1: netisr 0}
47006 root 76 20 3656K 1600K wait 21:20 0.00% sh
0 root -68 0 0K 88K - 20:23 0.00% {em2 taskq}
6810 nobody 74 r30 3368K 1484K nanslp 14:37 0.00% LCDd
35 root -8 - 0K 8K mdwait 13:50 0.00% md1
20 root 44 - 0K 8K syncer 11:39 0.00% syncer
2 root -8 - 0K 8K - 11:16 0.00% g_event
5227 root 65 20 46428K 17508K nanslp 11:08 0.00% php
17355 root 44 0 7612K 5184K kqread 10:49 0.00% lighttpd
12 root -16 - 0K 8K sleep 10:35 0.00% ng_queue
11 root -68 - 0K 128K WAIT 9:07 0.00% {irq19: fxp0 fxp4}
4 root -8 - 0K 8K - 8:26 0.00% g_down
29 root -8 - 0K 8K mdwait 7:15 0.00% md0
27610 root 44 0 8464K 4440K select 7:00 0.00% {mpd5}
47631 root 44 0 8984K 6272K bpf 4:10 0.00% tcpdump
44283 dhcpd 44 0 8436K 6552K select 4:02 0.00% dhcpd
3 root -8 - 0K 8K - 3:43 0.00% g_up
33556 root 44 0 3352K 1308K select 3:42 0.00% miniupnpd
9595 root 64 20 3316K 1336K select 3:24 0.00% apinger
1433 root 44 0 4948K 2456K select 2:48 0.00% syslogd
7 root -16 - 0K 8K pftm 2:38 0.00% pfpurge
0 root -16 0 0K 88K sched 2:36 0.00% {swapper}
11 root -64 - 0K 128K WAIT 2:27 0.00% {irq14: ata0}
32508 nobody 44 0 5556K 2824K select 1:28 0.00% dnsmasq
Here I am maxing out my two WAN connections at 20Mbps and 23Mbps (the best I can get at midday) but don't have the dashboard open. The actual figures dance around a bit but this looks like a good average. The interfaces used are fxp5 and fxp6 (the two WAN connections) and em1. Interestingly fxp5 doesn't appear so I assume it shares an IRQ. (Aside: could I improve matters by using a different fxp interface? Hmm)
last pid: 17219; load averages: 0.90, 0.63, 0.43 up 155+21:09:45 12:37:20
109 processes: 5 running, 89 sleeping, 15 waiting
CPU: 0.0% user, 0.4% nice, 17.6% system, 6.7% interrupt, 75.3% idle
Mem: 73M Active, 51M Inact, 62M Wired, 204K Cache, 59M Buf, 300M Free
Swap:
PID USERNAME PRI NICE SIZE RES STATE TIME WCPU COMMAND
10 root 171 ki31 0K 8K RUN 3610.4 69.97% idle
11 root -68 - 0K 128K WAIT 237:42 10.99% {irq18: em0 ath0+}
11 root -68 - 0K 128K RUN 91:01 6.98% {irq17: fxp2 fxp6}
0 root -68 0 0K 88K RUN 31:25 6.98% {em1 taskq}
12 root -16 - 0K 8K sleep 10:38 0.98% ng_queue
11 root -32 - 0K 128K WAIT 462:53 0.00% {swi4: clock}
0 root -68 0 0K 88K - 31:01 0.00% {ath0 taskq}
13 root -16 - 0K 8K - 30:52 0.00% yarrow
0 root -68 0 0K 88K - 28:33 0.00% {em0 taskq}
11 root -44 - 0K 128K WAIT 27:13 0.00% {swi1: netisr 0}
47006 root 76 20 3656K 1600K wait 21:21 0.00% sh
0 root -68 0 0K 88K - 20:24 0.00% {em2 taskq}
6810 nobody 74 r30 3368K 1484K RUN 14:38 0.00% LCDd
35 root -8 - 0K 8K mdwait 13:50 0.00% md1
20 root 44 - 0K 8K syncer 11:39 0.00% syncer
2 root -8 - 0K 8K - 11:16 0.00% g_event
5227 root 65 20 46428K 17508K nanslp 11:09 0.00% php
17355 root 44 0 7612K 5184K kqread 10:50 0.00% lighttpd
11 root -68 - 0K 128K WAIT 9:07 0.00% {irq19: fxp0 fxp4}
4 root -8 - 0K 8K - 8:26 0.00% g_down
29 root -8 - 0K 8K mdwait 7:15 0.00% md0
27610 root 44 0 8464K 4440K select 7:00 0.00% {mpd5}
47631 root 44 0 8984K 6272K bpf 4:10 0.00% tcpdump
44283 dhcpd 44 0 8436K 6552K select 4:02 0.00% dhcpd
3 root -8 - 0K 8K - 3:43 0.00% g_up
33556 root 44 0 3352K 1308K select 3:42 0.00% miniupnpd
9595 root 64 20 3316K 1336K select 3:24 0.00% apinger
1433 root 44 0 4948K 2456K select 2:48 0.00% syslogd
7 root -16 - 0K 8K pftm 2:38 0.00% pfpurge
0 root -16 0 0K 88K sched 2:36 0.00% {swapper}
11 root -64 - 0K 128K WAIT 2:27 0.00% {irq14: ata0}
32508 nobody 44 0 5556K 2824K select 1:28 0.00% dnsmasq
21 root -16 - 0K 8K sdflus 1:25 0.00% softdepflush
14 root -64 - 0K 96K - 1:11 0.00% {usbus2}
47955 root 44 0 3316K 892K piperd 1:10 0.00% logger
18 root -16 - 0K 8K psleep 1:05 0.00% bufdaemon
Since my ath0 interface is 54Mbps theoretical max, I probably couldn't max my WAN interfaces through it, so I haven't tried. Edit: like Jim just said!
Steve
-
stephenw10,
the WiFi network is hardly loaded at all for me.
The CPU is an Intel Pentium 4 at 3 GHz.
-
You haven't been able to get ath0 working?
There are a number of different cpus that could be 'Pentium 4 3GHz'. Any of them should be plenty powerful enough.
Steve
-
stephenw10, ath0 works well, I would say even with no problems.
The processor is really powerful, so it is strange that the peak load reaches 85%. Is this weird, or is it normal?
-
Maybe you could try to replicate the test conditions I used and produce figures we can compare. Your throughput is going to be higher but your cpu is substantially more powerful.
Did you try comparing with the webgui dashboard open and closed? When I first realised that I was quite surprised at the cpu load.
I don't know what the exact conditions were when you took your 'top' screenshot earlier but it looks like ath0 and swi5 are using a large number of cpu cycles, perhaps not an unreasonable amount for 100Mbps. However you could not get that bandwidth through an ath0 interface as Jim said.
You also have ng_queue using quite a bit. You have two, presumably because your cpu supports hyper-threading and hence appears as two cpus, but both are far higher than mine.
Are you doing QoS or any sort of traffic shaping?
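For background: the ng_queue threads belong to FreeBSD's netgraph subsystem, which pfSense uses for mpd5 (the L2TP client visible in the process list earlier) among other things. A quick way to see how busy netgraph is, is to list its active nodes. This is just a sketch assuming a FreeBSD/pfSense shell with ngctl present; it falls back to a message elsewhere:

```shell
# List active netgraph nodes; each node is work that the ng_queue
# threads may have to service. On pfSense, mpd5's PPP/L2TP links
# show up here. Degrades gracefully on systems without netgraph.
if command -v ngctl >/dev/null 2>&1; then
    ngctl list
else
    echo "ngctl not available (netgraph is FreeBSD-only)"
fi
```

A long node list during a transfer would be consistent with the high ng_queue CPU time seen in the 'top' output above.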
Steve