RRD Graphs Incorrect
-
Is it UDP traffic?
If yes, then you should check the receiver's side.
Not everything you are pushing goes through the firewall.

No, it's standard iperf TCP traffic. In this case, all traffic did go through the firewall, as the laptop was plugged directly into the LAN port of the pfSense box, which was in bridge mode. The WAN was plugged into my home network, and the iperf server was on my desktop machine.
-
Which switch "-??" do you use to regulate TCP bandwidth in iperf?
-
Which switch "-??" do you use to regulate TCP bandwidth in iperf?
I'm using fairly standard options:
```
iperf -c 192.168.1.20 -w 64000 -N
```
Basically, increase the TCP window to 64000 bytes and enable TCP No Delay.
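For reference, the matching server side on the desktop would be something along these lines (a sketch; the thread only shows the client invocation, so mirroring -w on the receiver is an assumption):
```
# On the desktop at 192.168.1.20: listen with a matching 64 KB window.
# The -w value mirrors the client and is an assumption, not from the thread.
iperf -s -w 64000
```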
-
Did you try to utilize it at about 300Mb/s?
-
Did you try to utilize it at about 300Mb/s?
Do you mean limit iperf to 300Mbps? I believe you can only specify bandwidth when using UDP.
-
Yes, iperf.
-
Yes, iperf.
You can only specify bandwidth with iperf when using UDP. With TCP, you can specify the amount of data to transfer or the length of time to transfer, but not the rate.
```
Usage: iperf [-s|-c host] [options]
       iperf [-h|--help] [-v|--version]

Client/Server:
  -f, --format    [kmKM]   format to report: Kbits, Mbits, KBytes, MBytes
  -i, --interval  #        seconds between periodic bandwidth reports
  -l, --len       #[KM]    length of buffer to read or write (default 8 KB)
  -m, --print_mss          print TCP maximum segment size (MTU - TCP/IP header)
  -o, --output    <filename> output the report or error message to this specified file
  -p, --port      #        server port to listen on/connect to
  -u, --udp                use UDP rather than TCP
  -w, --window    #[KM]    TCP window size (socket buffer size)
  -B, --bind      <host>   bind to <host>, an interface or multicast address
  -C, --compatibility      for use with older versions does not sent extra msgs
  -M, --mss       #        set TCP maximum segment size (MTU - 40 bytes)
  -N, --nodelay            set TCP no delay, disabling Nagle's Algorithm
  -V, --IPv6Version        Set the domain to IPv6

Server specific:
  -s, --server             run in server mode
  -D, --daemon             run the server as a daemon
  -R, --remove             remove service in win32

Client specific:
  -b, --bandwidth #[KM]    for UDP, bandwidth to send at in bits/sec
                           (default 1 Mbit/sec, implies -u)
  -c, --client    <host>   run in client mode, connecting to <host>
  -d, --dualtest           Do a bidirectional test simultaneously
  -n, --num       #[KM]    number of bytes to transmit (instead of -t)
  -r, --tradeoff           Do a bidirectional test individually
  -t, --time      #        time in seconds to transmit for (default 10 secs)
  -F, --fileinput <name>   input the data to be transmitted from a file
  -I, --stdin              input the data to be transmitted from stdin
  -L, --listenport #       port to recieve bidirectional tests back on
  -P, --parallel  #        number of parallel client threads to run
  -T, --ttl       #        time-to-live, for multicast (default 1)

Miscellaneous:
  -h, --help               print this message and quit
  -v, --version            print version information and quit

[KM] Indicates options that support a K or M suffix for kilo- or mega-

The TCP window size option can be set by the environment variable
TCP_WINDOW_SIZE. Most other options can be set by an environment variable
IPERF_<long option name>, such as IPERF_BANDWIDTH.

Report bugs to dast@nlanr.net
```
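To tie that usage back to the question: a 300 Mbit/s test is only directly controllable with UDP via -b; with TCP you can only bound the run with -t or -n. As a sketch, reusing the server address from the earlier command (the 300M figure is carried over from this thread, and the 30-second duration is arbitrary):
```
# UDP: -b sets the target rate (it implies -u, shown explicitly for clarity).
iperf -c 192.168.1.20 -u -b 300M

# TCP: no rate control; you can only bound duration (-t) or bytes (-n).
iperf -c 192.168.1.20 -w 64000 -N -t 30
```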
-
Yes, I know; that is why I asked the first question.
For RRD it does not matter what kind of traffic you are passing. So, could you please transfer 300Mb/s UDP and see what you get on the graph?
-
Yes, I know; that is why I asked the first question.
For RRD it does not matter what kind of traffic you are passing. So, could you please transfer 300Mb/s UDP and see what you get on the graph?

Sure, I'll give that a shot first thing tomorrow AM and report back.
-
When you do the testing, please capture the output of the command below for 3-4 minutes:
```
/usr/bin/netstat -nbf link -I XXX -w60
```
where XXX is the name of your interface.
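For example, with em0 as a stand-in for the interface name (em0 is an assumption; substitute whatever ifconfig reports on your box):
```
# Print the interface's packet/byte counters every 60 seconds (-w60),
# numerically (-n), with byte counts (-b), for the link layer (-f link).
/usr/bin/netstat -nbf link -I em0 -w60
```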
-
You are right. It seems RRD graphs do not show correct values when traffic goes above 500Mb/s.
You might wish to try the following:
1) kill your current updaterrd.sh process
2) modify the script /var/db/rrd/updaterrd.sh, replacing the line
```
sleep 60
```
with
```
sleep 30
```
3) start a new one with
```
/bin/sh /var/db/rrd/updaterrd.sh &
```
Note: your graphs will be updated twice as often as before, which will create some additional CPU usage.
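Spelled out as commands, those three steps would look roughly like this (a sketch; it assumes pkill and sed are available on your pfSense install and that only one updaterrd.sh process is running):
```
# 1) Kill the running RRD updater script (assumes one matching process).
pkill -f updaterrd.sh

# 2) Halve the polling interval from 60 to 30 seconds (FreeBSD sed syntax).
sed -i '' 's/sleep 60/sleep 30/' /var/db/rrd/updaterrd.sh

# 3) Restart the updater in the background.
/bin/sh /var/db/rrd/updaterrd.sh &
```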
-
You are right. It seems RRD graphs do not show correct values when traffic goes above 500Mb/s.
You might wish to try the following:
1) kill your current updaterrd.sh process
2) modify the script /var/db/rrd/updaterrd.sh, replacing the line
```
sleep 60
```
with
```
sleep 30
```
3) start a new one with
```
/bin/sh /var/db/rrd/updaterrd.sh &
```
Note: your graphs will be updated twice as often as before, which will create some additional CPU usage.
You're the man, Eugene. I didn't have time to set everything up to test this today, and it looked like it was going to be this weekend before I could get around to it. So will lowering the sleep timer effectively increase the sampling rate of RRD? I.e., will the "1 minute average" become a "30 second average"?
-
Unfortunately I do not know the answer to this question.
I tend to think yes - it will become a 30-second average, but I do not know RRD well enough to be firm on this. More research is needed.
-
The counters are 32-bit; at that throughput, the counter will wrap before it samples again. Offhand, I'm not sure how increasing the sampling rate will affect the RRD output.
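To put numbers on the wrap-around (a worked example; only the 500 Mbit/s figure comes from the report above, the rest follows from a 32-bit byte counter):
```
# A 32-bit byte counter wraps after 2^32 bytes = 2^35 bits.
# Seconds until wrap = 2^35 / rate_in_bits_per_second.
echo "2^35 / (500 * 10^6)" | bc -l   # ~68.7 s at 500 Mbit/s
echo "2^35 / (600 * 10^6)" | bc -l   # ~57.3 s at 600 Mbit/s
```
So with a 60-second poll the counter only survives rates up to roughly 573 Mbit/s; polling every 30 seconds doubles that headroom to about 1.1 Gbit/s, which is why the sleep 30 change above helps.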
-
pfsense-utils.inc has to be changed to get correct RRD output.
If you make the changes described above, then RRD will be getting samples every 30 seconds but calculating as if they were 60-second data. Though on my test system I get correct results (the same as in iperf), I do not know why.

The guys say that this is not going to be a problem in 2.0, as 64-bit counters will be used.
-
Could you please share your hardware specs with this level of performance?
Thanks.
-
Could you please share your hardware specs with this level of performance?
Thanks.

Here you go:
Intel Pentium Dual Core E5200 (2.5GHz)
4GB DDR2-800 (2GB works just as well)
Supermicro X7SBL-LN2
Intel 82573V & 82573L PCI-E NICs
-
You can safely increase the rate to 30 seconds; it should not affect the graphing.