TX/RX Blinking LEDs on WRAP platform
-
That will probably affect performance a lot. Don't think that would be a good thing to do.
-
That will probably affect performance a lot. Don't think that would be a good thing to do.
Okay, maybe so, but how would I go about doing it in order to find out :)
I mean, I have a server that has headers on its mainboard for "LAN LED 1" and "LAN LED 2", so I know there is a way to do this, right? Not to mention that nearly every switch on the planet has blinking LEDs for interface activity. Granted, the WRAP platform is a whole different bear…
Any ideas on how I can implement this, or, if you don't know, where I should look to find out how I can make this happen?
-
Well, I decided to play around a little bit tonight, and I came up with a horrible shell script way to do it; but it works and is kinda cool :)
Although the CPU usage page on my WRAP went up from 3% - 8% to 17% - 28%, I've not noticed any throughput issues.
In case anyone wants to see my crude method:
Step 1) If your / filesystem isn't already RW, make it that way (mount -o rw -u /)
Step 2) If your /cf filesystem isn't already RW, make it that way (mount -o rw -u /cf)
Step 3) Create a file (I called mine /bin/blink) and make it executable (chmod +x /bin/blink)
Step 4) Put in something like this:
#!/bin/sh
# Interface to check
nic=ath0
# Set initial temp variables
ti=`netstat -I ${nic} -nWb -f link | tail -n 1 | awk '{print $7}'`
to=`netstat -I ${nic} -nWb -f link | tail -n 1 | awk '{print $10}'`
# Begin infinite loop
while [ 1 ]
do
    # Run every 1 second(s)
    sleep 1
    # Check in/out bytes
    pi=`netstat -I ${nic} -nWb -f link | tail -n 1 | awk '{print $7}'`
    po=`netstat -I ${nic} -nWb -f link | tail -n 1 | awk '{print $10}'`
    # Compute difference
    ei=$(expr $pi - $ti)
    eo=$(expr $po - $to)
    # Blink the LED if input bytes increased by more than 5 KB
    if [ ${ei} -gt 5120 ]
    then
        echo f1 > /dev/led/led3
        ti=${pi}
    # Blink the LED if output bytes increased by more than 5 KB
    elif [ ${eo} -gt 5120 ]
    then
        echo f1 > /dev/led/led3
        to=${po}
    # Otherwise turn the LED off
    else
        echo 0 > /dev/led/led3
    fi
done
Step 5) Add a cute entry to your config.xml's <system> block to start it at boot (<shellcmd>sh /bin/blink&</shellcmd>)
Step 6) If needed, RO your / and/or /cf filesystems (mount -o ro -u / and/or mount -o ro -u /cf)
Step 7) Reboot and enjoy!
Of course, if anyone else has a better idea, I'd love to see your thoughts.
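For what it's worth, the 5 KB threshold logic in the loop above can be pulled out into a small helper, which makes it easy to sanity-check off the router. A minimal sketch — the function name should_blink and the sample counter values are illustrative, not part of the original script:

```shell
#!/bin/sh
# Decide the LED state from previous/current byte counters.
# Prints "f1" (blink) when either delta exceeds 5120 bytes, else "0",
# mirroring the 5 KB threshold used in the script above.
should_blink() {
    prev_in=$1; prev_out=$2; cur_in=$3; cur_out=$4
    if [ $((cur_in - prev_in)) -gt 5120 ] || [ $((cur_out - prev_out)) -gt 5120 ]; then
        echo f1
    else
        echo 0
    fi
}

should_blink 0 0 10000 0       # >5 KB of input: prints f1
should_blink 100 100 200 300   # little traffic: prints 0
```

Feeding it the two netstat-derived counters each second would reproduce the same blink behavior with one decision point instead of the if/elif/else chain.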
-
If you re-write that in C or C++ I bet your CPU usage would decrease. Also, you can use /etc/rc.conf_mount_rw and /etc/rc.conf_mount_ro instead of the mount commands.
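With those helper scripts (assuming they are present on the image, as this post suggests), the remount steps from the earlier post collapse to a fragment like:

```shell
/etc/rc.conf_mount_rw    # remount / and /cf read-write
vi /bin/blink            # make your edits
/etc/rc.conf_mount_ro    # drop back to read-only
```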
-
If you re-write that in C or C++ I bet your CPU usage would decrease. Also, you can use /etc/rc.conf_mount_rw and /etc/rc.conf_mount_ro instead of the mount commands.
Oh man, it's been so long since I've written anything in C/C++, it would probably take me four times as long as it should :)
Anyone out there care to give it a shot and post your results? I'd be curious to see if there is a decent CPU usage decrease :)
-
I would rather encourage a more general LED package where you can assign different events to the LEDs, like turning one on upon a new alert, and so on. Depending on CPU usage, showing traffic could be integrated there as well.
-
I spent a few minutes on this and used code from MiniUPnPd. Here are the links to download:
code: http://wgnrs.dynalias.com:81/pfsense/blinkled/blinkled.c
binary: http://wgnrs.dynalias.com:81/pfsense/blinkled/blinkled
To compile:
gcc blinkled.c -lkvm -o blinkled
To run:
./blinkled -i ath0 -l /dev/led/led3 -d
To run as background process:
./blinkled -i ath0 -l /dev/led/led3
-
Hmmm…
CPU usage appears to be nearly identical. A 'ps aux' on the shell console shows that after the same period of time (1 minute) memory usage was identical at 1.0%; CPU was 0.5% with the C-based version versus 0.7% with the shell version, and the TIME column showed 1.03 seconds for the C-based version compared to 1.04 for the shell version. The GUI showed no real change in CPU usage.
I should also mention the shell version I was benchmarking is a bit different than the one I originally posted. It now controls TWO leds: the WAN interface in/out packets change led2 and the wireless interface in/out packets change led3.
Any ideas why there wouldn't be any noticeable change? Any way this can be done more efficiently?
Thanks for everyone's thoughts, and thanks to rsw for submitting the sample code!
-
Yes, Ryan is on the right path. This code is very good…
-
The c code I posted is the most efficient way of doing this. It reads the values directly from kernel memory.
You stated "Although the CPU usage page on my WRAP went up from 3% - 8% to 17% - 28%, I've not noticed any throughput issues." So I'm confused on how you state now that the GUI isn't showing any CPU change.
For testing purposes I did a comparison myself using the webgui CPU usage. Base CPU usage is 1-2%. When using your original script CPU usage is 14-18%. With the c program CPU usage is 1-3%.
I personally do not like the shell script idea. If you look in the netstat code it reads from the kernel memory to get the values it outputs to the console. Calling netstat multiple times and parsing the output with tail and awk is inefficient. For a one time deal this would be fine, but to have something running as a background process every second, this will affect system performance.
-
The c code I posted is the most efficient way of doing this. It reads the values directly from kernel memory.
You stated "Although the CPU usage page on my WRAP went up from 3% - 8% to 17% - 28%, I've not noticed any throughput issues." So I'm confused on how you state now that the GUI isn't showing any CPU change.
For testing purposes I did a comparison myself using the webgui CPU usage. Base CPU usage is 1-2%. When using your original script CPU usage is 14-18%. With the c program CPU usage is 1-3%.
I personally do not like the shell script idea. If you look in the netstat code it reads from the kernel memory to get the values it outputs to the console. Calling netstat multiple times and parsing the output with tail and awk is inefficient. For a one time deal this would be fine, but to have something running as a background process every second, this will affect system performance.
I have noticed no increase in CPU usage, and memory usage never went beyond about 700K. Under no conditions did the binary ever go above 1%.
jdijulio: Please watch a 'top' list and rank by CPU usage or idle usage. You're chasing the wrong zombie here.
-
I did some benchmarks with rsw686's program running and the max throughput was not affected by it. I vote for adding it at System > Advanced with an interface selection dropdown, defaulted to disabled. This way you could make WLAN traffic visible on LED3 too ;D
Btw, Scott added some code to make LED2 on a WRAP and the ErrorLED on Soekris blink on unacknowledged alerts. This feature already can be found in the latest snapshot.
-
Btw, Scott added some code to make LED2 on a WRAP and the ErrorLED on Soekris blink on unacknowledged alerts. This feature already can be found in the latest snapshot.
This simply is w00t :D (sorry, had to throw that in)
-
The c code I posted is the most efficient way of doing this. It reads the values directly from kernel memory.
You stated "Although the CPU usage page on my WRAP went up from 3% - 8% to 17% - 28%, I've not noticed any throughput issues." So I'm confused on how you state now that the GUI isn't showing any CPU change.
For testing purposes I did a comparison myself using the webgui CPU usage. Base CPU usage is 1-2%. When using your original script CPU usage is 14-18%. With the c program CPU usage is 1-3%.
I personally do not like the shell script idea. If you look in the netstat code it reads from the kernel memory to get the values it outputs to the console. Calling netstat multiple times and parsing the output with tail and awk is inefficient. For a one time deal this would be fine, but to have something running as a background process every second, this will affect system performance.
I just re-downloaded the binary from rsw's post and also re-ran my benchmarks on my WRAP unit. With nothing additional running (no shell script and no C program), I had a base CPU usage of between 3% and 6% in the WebGUI, and according to a 'ps aux' I had between 79% - 90% of the CPU in IDLE state.
I did two sets of tests; one while the WRAP had no traffic going through it, and one while a single wireless client was performing a "speed test" through it.
Set 1 (No wireless clients)
When I used the shell script, over a period of 1 minute I had an average CPU usage of 22.5% in the WebGUI. A 'ps aux' reported 0.1% of CPU usage, and 1.0% memory usage, with a time of 0.88s. A 'ps aux' also said that I had 73.2% of the CPU in IDLE state.
When I used the C program, over a period of 1 minute I had an average CPU usage of 4.6% in the WebGUI. A 'ps aux' reported 0.0% of CPU usage, and 0.6% of memory usage, with a time of 0.08s. A 'ps aux' also said that I had 89.6% of the CPU in IDLE state.
Set 2 (One wireless client)
When I used the shell script, over a period of 35 seconds I had an average CPU usage of 35.6% in the WebGUI. A 'ps aux' reported 0.0% of CPU usage, and 1.0% memory usage, with a time of 0.50s. A 'ps aux' also said that I had 70.3% of the CPU in IDLE state.
When I used the C program, over a period of 35 seconds I had an average CPU usage of 20.6% in the WebGUI. A 'ps aux' reported 0.0% of CPU usage, and 0.6% of memory usage, with a time of 0.05s. A 'ps aux' also said that I had 80.6% of the CPU in IDLE state.
Clearly, when I tested this earlier something else must have been going on.
I (honestly) expected the C program to be faster than the shell script anyway. I wasn't trying to "put down" rsw's work or anything by my follow up post; I just wasn't seeing the same thing.
My only thought is that maybe the shell script process wasn't killed properly and was still running in the background when I originally benchmarked the C program, which clearly could have skewed the results. I'm not sure, but either way, it does in fact appear that the C program performs quite well (again, as I had hoped/expected).
Sorry to ruffle everyone's feathers...
Slightly off topic, can someone explain to me how the CPU usage on the WebGUI is calculated, compared to what I'm seeing from a generic 'ps aux' on the shell console? The numbers never seem to jibe, and I'm curious whether I'm just reading them wrong, or whether the value is calculated differently than I would expect.
Thanks everyone!
PS: Hoba - I agree that rsw's code should be put into the snapshots and eventually into a stable version! It's a neat feature for folks who have hardware that supports it.
-
To auto-start blinkled when the WRAP boots:
create a .sh file in /usr/local/etc/rc.d/ (named, for example, blinkled.sh) with lines like these:
/bin/blinkled -i sis1 -l /dev/led/led2 (WAN)
/bin/blinkled -i sis2 -l /dev/led/led3 (OPT1)
then make it executable: chmod +x /usr/local/etc/rc.d/blinkled.sh
All .sh files in this path are executed at boot.
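Collected into one file, the boot script described above would look something like this. The interface names sis1/sis2 and LED device nodes are WRAP-specific values from the post, and the commands are assumed to background themselves (per rsw's usage note earlier, running without -d starts a background process):

```shell
#!/bin/sh
# /usr/local/etc/rc.d/blinkled.sh -- start the LED blinkers at boot
/bin/blinkled -i sis1 -l /dev/led/led2   # WAN
/bin/blinkled -i sis2 -l /dev/led/led3   # OPT1
```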
Regards.
-
how do you create the .sh file?
-
As noted above, the shell script will consume rather a lot of CPU time. Ryan's solution to this was much nicer. I wonder why we didn't follow that thought and integrate it. ;)
-
I also lost the RRD graphs after using the shell command version.
-
Worked perfectly for me. I added the .sh file, it executes at every bootup, and it uses very, very little CPU. THANKS! I think this should definitely be put in the next release of the embedded image, but disabled by default and configurable somewhere under the System menu.