@w0w
These are definitely not MY personal requirements; this affects EVERY pfSense user. More so for an office or small/mid-sized company in the US/Europe, less so for a casual web surfer in Tanzania, a techno-geek with a home network, or a DevOps engineer working from home.
But making a decision based on the wrong testing strategy and the wrong instruments -> wrong conclusions, and certainly wasted time and effort.
Agree?
Earlier in this thread You wrote:
——
RACK and BBR will mostly have an effect running on endpoints, like streaming servers or tunnel endpoints. Since pfSense is a firewall there are not so many situations when BBR or RACK will give any benefit,
——
TCP congestion control is managed by endpoints (server and/or client, e.g. web browser and web server), so anything not placed on the firewall is not using congestion control, like newreno or any other.
Endpoint means that the firewall itself is an endpoint; then congestion control is applied, otherwise all other traffic is just passed to the upstream/downstream interface.
——
I am pointing out, in a friendly way, that this is not correct. By saying “TCP congestion control is managed by endpoints” You show that You do not fully understand how QUIC (and the so-called HTTP/3) works, and how the overall CC strategy, the BSD/*NIX TCP stack parameters, the NIC parameters, the ISP’s switches at the aggregation level, and the ISP’s core routers (with their sophisticated routing policies, shapers and limiters) all affect the packet flow back and forth between external users and Your application server.
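For example, right on the pfSense box (plain FreeBSD under the hood) You can at least see which CC modules and which TCP stacks are loaded and which are active, before arguing about them. A minimal sketch, assuming shell access on the firewall and the standard FreeBSD sysctl OID names:

```python
# Minimal sketch: inspect the TCP stack on the firewall itself.
# Assumes this runs on pfSense / FreeBSD with the standard sysctl(8) tool in PATH.
import subprocess

def sysctl(oid: str) -> str:
    """Return the value of a sysctl OID as a plain string."""
    out = subprocess.run(["sysctl", "-n", oid], capture_output=True, text=True, check=True)
    return out.stdout.strip()

# Congestion control modules (newreno, cubic, ...) loaded into the kernel,
# and the default one used for new connections.
print("CC modules available :", sysctl("net.inet.tcp.cc.available"))
print("CC default algorithm :", sysctl("net.inet.tcp.cc.algorithm"))

# On recent FreeBSD, RACK and BBR ship as alternate TCP stacks ("function blocks"),
# selected separately from the CC module.
print("TCP stacks available :", sysctl("net.inet.tcp.functions_available"))
print("TCP stack default    :", sysctl("net.inet.tcp.functions_default"))
```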
And now You make that decision based on … an ordinary SpeedTest? That is really the wrong way to compare CCs!
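If You really want to compare CC algorithms, at the very least run repeated iperf3 tests against a fixed endpoint You control, switch the algorithm per run, and compare the distributions, not a single SpeedTest click. A rough sketch of that idea (the server address is a placeholder; the `-C`/`--congestion` option requires an iperf3 build that supports it, as on Linux and FreeBSD):

```python
# Rough sketch: repeated iperf3 runs per CC algorithm instead of one SpeedTest click.
# Assumes iperf3 on both ends; SERVER is a placeholder test endpoint You control.
import json
import statistics
import subprocess

SERVER = "192.0.2.10"          # placeholder iperf3 server address
ALGOS = ["newreno", "cubic"]   # adjust to the algorithms actually available on this box
RUNS = 5

for algo in ALGOS:
    rates = []
    for _ in range(RUNS):
        out = subprocess.run(
            ["iperf3", "-c", SERVER, "-t", "20", "-J", "-C", algo],
            capture_output=True, text=True, check=True,
        )
        result = json.loads(out.stdout)
        # sender throughput for this run, in Mbit/s
        rates.append(result["end"]["sum_sent"]["bits_per_second"] / 1e6)
    print(f"{algo:8s} median {statistics.median(rates):7.1f} Mbit/s "
          f"min {min(rates):7.1f} max {max(rates):7.1f}")
```

Run it from a host behind the firewall (or on the firewall itself, if the firewall’s own endpoint traffic is what You are testing) and repeat it at different times of day before drawing any conclusion.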
P.S.
Do You know this small (but important) example? Your server’s ~72 Mb/s with 1 ms ping -> after 1% PL (packet loss) on the user’s “last mile” it BECOMES ~54 Mb/s with 4 ms ping -> after +100 ms of RTT added by a “fat” backbone link it BECOMES 5.7 Mb/s with 104 ms ping.
Only 1% PL and +100 ms RTT turn Your “magic server’s 72 Mb/s” into “5.7 Mb/s”!
Imagine what happens with 2-3% PL and 80-120 ms RTT?
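These numbers follow the well-known Mathis et al. approximation for loss-limited, Reno-style TCP: rate ≈ (MSS / RTT) · (1.22 / sqrt(p)). It is only a model and will not reproduce my exact figures above, but it shows the same order-of-magnitude collapse; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check with the Mathis et al. approximation for
# loss-limited, Reno-style TCP: rate ~= (MSS / RTT) * (1.22 / sqrt(p)).
# A model only, not a measurement; real stacks (CUBIC, RACK, BBR) deviate from it.
from math import sqrt

MSS_BITS = 1460 * 8   # typical Ethernet MSS, in bits

def mathis_mbps(rtt_ms: float, loss: float) -> float:
    """Approximate upper bound for a single Reno-like flow, in Mbit/s."""
    rtt_s = rtt_ms / 1000.0
    return MSS_BITS / rtt_s * 1.22 / sqrt(loss) / 1e6

for rtt_ms, loss in [(4, 0.01), (104, 0.01), (104, 0.03), (120, 0.02)]:
    print(f"RTT {rtt_ms:4d} ms, loss {loss*100:4.1f}% -> ~{mathis_mbps(rtt_ms, loss):6.1f} Mbit/s")
```

Per this model, a single Reno-style flow at 2-3% loss and ~100 ms RTT is left with well under 1 Mbit/s.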
This is all about one thing: YOU NEED TO MAKE PROFESSIONAL-GRADE MEASUREMENTS WITH THE RIGHT TOOLS!