DSL Reports speed test causing crash on upload

  • I've seen this issue reported here, and it looks like there is a bug (not sure?), and the solution was just to turn traffic shaping off. https://forum.pfsense.org/index.php?topic=121212.0
    I'm not sure if this is an actual bug, or something I'm doing wrong, so my apologies if this is the wrong place to post this.

    I have 2.3.4 on a Super Micro C2758 and I can repeatedly crash my system using the DSL Reports speed test here: http://www.dslreports.com/speedtest
    The upload portion of the test will crash pfSense when I have traffic shaping enabled.  With certain traffic-shaping configurations the entire machine locks up and reboots; with my current configuration it temporarily crashes my interfaces, but they come back up a few seconds later.

    I have gigabit Fios service, and I've been running the test on a Windows 7 machine.  It's the only machine that can cause the crash, which is interesting because I believe another user mentioned this same glitch using Windows Vista.

    If I run the test on its own with little to no other activity on my network, everything is stable and I get A+ ratings every time. But, when I have other traffic going on with other machines (in the 20-100Mbit range), then I can reliably crash pfsense by running the test.

    I do have a crash dump from the last time it happened. Should I post it here in this message?

    I've attached a screenshot of how my traffic shaping is currently configured.  I set my maximum upload/download to 870Mbit/s. I can get speed tests as high as 940Mbit/s, but I'd rather throttle it back to keep things stable.

  • I'm glad I ran across this post.  I have been having a very similar problem and it has been bugging me for the last couple of months.  While my hardware is slightly different from yours, I also experienced system crashes during the upload portion of the DSL Reports speed test.  I tried tweaking my traffic shaping settings, but nothing helped.  Eventually it would still crash during the upload and require a reboot.  I followed the thread you linked above (and a few related ones) and noticed that others have experienced similar stability issues, though on version 2.4 (like you, I'm on 2.3.4).  For instance, see here:


    Despite being on 2.3.4, I tried setting the igb queues to 1 today, and so far things appear stable.  Nothing I have tried since (including running the DSL Reports speed test several times) has resulted in a crash.  Here's hoping this solves the problem temporarily until 2.4 is released (it looks like this issue may already have been fixed for 2.4).

    Hope this helps.

  • Thought I would follow up on this thread to give an update:  A week and a half later things are still rock solid (no crashes).  From what I can tell, changing the igb queues to 1 did the trick for me.  I'm not 100% sure if the instability is related to the igb driver itself and its default number of queues, to traffic shaping, or to a combination of the two.  In any case, setting the igb queues manually is a workaround worth looking into if the system becomes unstable when the network interfaces are under high load (i.e. close to maxed out).
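    For anyone who wants to try the same workaround, the tunable goes in /boot/loader.conf.local (a sketch: the file and tunable name are standard on FreeBSD/pfSense, and the value is the one discussed in this thread):

    ```shell
    # /boot/loader.conf.local: loader tunables read at boot
    # Limit the igb(4) driver to a single queue per interface
    hw.igb.num_queues="1"
    ```

    Loader tunables only take effect after a reboot; the active queue count for the igb interfaces can then be checked in the boot messages (dmesg).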

  • I had this exact issue on an HP box using HP's version of the Intel i340T4 (82580 based) NIC. Adding hw.igb.num_queues="1" fixed it and things have been rock solid since.

  • @nycfly:

    I had this exact issue on an HP box using HP's version of the Intel i340T4 (82580 based) NIC. Adding hw.igb.num_queues="1" fixed it and things have been rock solid since.

    In the case of HP rebadged devices, HP sometimes modifies the BIOS and may disable certain features, which can make the device unstable if you treat it like a retail card. For example, the HP version of the Intel i350 has the MSI-X hardware feature disabled, which can cause issues if you enable MSI-X thinking it's a normal i350.

  • It has been a couple of months since I posted in this thread and I thought I would report back with some more findings.  I'm becoming convinced that there isn't actually a bug here, but rather that pfSense (and FreeBSD for that matter) need to be better tuned for higher-speed (e.g. gigabit) internet connections.  There are many great guides out there for this; here are a couple of links (the first being a very nice thread that Harvy66 started on the subject):


    After adjusting various tuning parameters and testing for stability, I decided to raise hw.igb.num_queues from "1" to "4" about a month ago.  I actually have an Intel i350 4-port NIC and an Intel i211 2-port NIC in my pfSense box - the former supports up to 8 queues, the latter up to 4.  I figured I'd slowly increase the number of queues to see if the system stayed stable.  Now, after about a month with no issues whatsoever, I decided tonight to remove the num_queues restriction completely (i.e. use the default and let the system decide, which means the i350 interfaces use 8 queues and the i211 interfaces use 4).  After some initial testing, things appear stable so far - hoping it will stay this way.  Will report back again in a few weeks with another update.

    I hope this helps - would be curious to see if tuning network parameters also helps others overcome the num_queues workaround while maintaining stability.

  • Thank you, tman222, that is very interesting and helpful. Indeed I lifted Harvy66's performance tuned parameters from the other thread, removed the limit on queues and some initial stress tests (the dslreports speedtest and simultaneous p2p download of all 12 current Ubuntu flavors) do not result in any kernel panics.

    This is what my /boot/loader.conf.local now looks like:

    # hw.igb.num_queues="4"

    What parameters did you end up with?

  • That's great to hear!  Here are the parameters I'm currently using:


    Note that some of these values are rather large - probably larger than they need to be.  However, I have a decent amount of memory (16GB) in my pfSense box, so I figured I could afford to be a bit more generous.  Now that I'm looking at it, I could probably also increase net.pf.source_nodes.hashsize, but I think I need to read up on it a bit first.
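    Since the actual parameter list didn't make it into the thread as text, here is a purely illustrative sketch of the kinds of loader tunables the gigabit-tuning guides linked above typically cover.  The tunable names are real FreeBSD/igb(4) ones, but every value below is an assumption for illustration, not the poster's configuration:

    ```shell
    # /boot/loader.conf.local: illustrative values only, not the settings from this thread
    kern.ipc.nmbclusters="1000000"          # network mbuf clusters (generous with 16GB RAM)
    hw.igb.rxd="4096"                       # receive descriptors per ring
    hw.igb.txd="4096"                       # transmit descriptors per ring
    hw.igb.rx_process_limit="-1"            # no cap on packets processed per interrupt
    net.pf.source_nodes.hashsize="1048576"  # pf source-node hash table size (mentioned above)
    ```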

  • Just thought I would follow up on this post with an update about a week later:  I recently upgraded my system to 2.4.0-RC and kept all the tuned network settings.  After performing some additional testing, things are still stable for me, and as mentioned above I no longer limit the interface queues in order to maintain stability.  It stands to reason, then, that tuning network parameters is worth trying first if one experiences stability issues (as described in this thread), before limiting the queues on the interfaces.

  • Mine has also been stable with the tuned parameters, so I definitely agree this is a better solution than limiting the number of queues.