RRD Graphs and >1Gbit/s Rates
-
Also tangentially related… Anyone running 10Gb/s-connected pfSense instances out there notice weirdness in the bandwidth graphs? Admittedly I'm not running 2.1 (but 2.0.2 instead), but if my bandwidth ever goes over 1Gb/s, the RRD bandwidth graphs "break" and don't really show data for that timeframe.
In Cacti we had to raise the upper limit the max metric would tolerate before 10Gb/s graphing of switch-port bandwidth usage would work. I suspect pfSense's limit is currently 1Gb/s (1,000,000,000 bps)?
-
I split this post into its own thread because it's not really relevant to the other issue.
This is most likely due to the maximum value restrictions placed on the RRD data sources when the data files are created; the high samples are probably being tossed out as invalid.
They're controlled by this in /etc/inc/rrd.inc:
/* Asume GigE for now */
$downstream = 125000000;
$upstream = 125000000;
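For context, here's a rough sketch of how those caps end up as the data-source maximum when the traffic RRD is first created (illustrative only; the variable names, RRA, and file name below are assumptions, not the exact code from rrd.inc):

    /* Illustrative sketch -- rrd.inc builds an "rrdtool create" command and the
       $downstream/$upstream caps become the DS maximum; rrdtool then records any
       sample above that maximum as unknown, so the graph shows a gap. */
    $rrdtool   = "/usr/local/bin/rrdtool";
    $rrddbpath = "/var/db/rrd/";      // where pfSense keeps its .rrd files
    $ifname    = "wan";               // hypothetical interface
    $step      = 60;                  // polling interval in seconds
    $heartbeat = $step * 2;

    /* Asume GigE for now */
    $downstream = 125000000;          // 1Gbit/s expressed as bytes per second
    $upstream   = 125000000;

    $cmd  = "$rrdtool create {$rrddbpath}{$ifname}-traffic.rrd --step $step ";
    $cmd .= "DS:inpass:COUNTER:$heartbeat:0:$downstream ";   // max = $downstream
    $cmd .= "DS:outpass:COUNTER:$heartbeat:0:$upstream ";    // max = $upstream
    $cmd .= "RRA:AVERAGE:0.5:1:1200";
    mwexec($cmd);                     // pfSense helper that runs the command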
Those values are in bytes per second (125,000,000 B/s = 1Gbit/s) and probably need to be bumped. HOWEVER, that would also require removing your RRD files and starting over fresh, since the maximum is written into each data file when it is created.
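A minimal sketch of what the bump might look like for 10GbE (hypothetical values, not taken from any release):

    /* Assume 10GigE instead */
    $downstream = 1250000000;   // 10Gbit/s = 1,250,000,000 bytes per second
    $upstream   = 1250000000;
    // Old files under /var/db/rrd/ still carry the 1Gbit/s maximum and must be
    // removed so they are recreated with the new cap.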
I imagine people on both 10Gbit/s and multi-gigabit lagg interfaces hit that.
-
Added a bug for it: https://redmine.pfsense.org/issues/2979