I just set my receive queues to 2.5k. It's pretty much an issue limited to traffic that can burst in quickly. Because interactive streams, like games, are on their own separate queue with reserved bandwidth, the change doesn't seem to affect anything except the bulk queues. Because pfSense rate-limits the burst down to fit into 48Mb/s in a much smoother fashion than Cisco does, my machine cannot ACK data it has not yet received, so the other side backs down. It seems it's not so much the burst itself causing issues, but that my machine would normally ACK all of the data in that burst as quickly as it came in, signaling to the other side that I'm ready to receive more, when the sender really needs to back off before Cisco clamps down hard.
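A rough sketch of the arithmetic behind the 2.5k queue choice. This is just my back-of-the-envelope model, not anything pfSense actually computes; the burst size and MTU are illustrative assumptions:

```python
# Illustrative sketch: can a 2.5k-packet queue absorb a single sender's
# burst, and how long does the 48 Mb/s shaped rate take to drain it?
# All numbers are assumptions for illustration, not pfSense internals.

RATE_BPS = 48_000_000 / 8   # shaped rate in bytes/second
QUEUE_PKTS = 2500           # receive-queue depth (the 2.5k setting)
MTU = 1500                  # assumed bytes per packet

def fits_in_queue(burst_bytes: int) -> bool:
    """Does a single sender's burst fit in the 2.5k-packet queue?"""
    return burst_bytes / MTU <= QUEUE_PKTS

def drain_time(burst_bytes: int) -> float:
    """Seconds needed to drain a queued burst at the shaped rate."""
    return burst_bytes / RATE_BPS

burst = 2_000_000  # a hypothetical 2 MB burst arriving near line rate
print(fits_in_queue(burst))          # ~1334 packets -> True, it fits
print(round(drain_time(burst), 2))   # ~0.33 s to drain at 48 Mb/s
```

So a burst from one fast sender sits in the queue for a fraction of a second while the ACKs pace out at the shaped rate, which is exactly the smoothing effect described above.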
This is mostly me just theorizing, but I am seeing much better results.
I did find that I need to limit my P2P queue's size. During the ramp-up of a heavily seeded torrent, like Fedora, the hundreds of sending endpoints would still peak over 50Mb/s on my WAN interface before leveling off, even though pfSense was making sure that I was only getting 48Mb/s. So while a large queue works fine for soaking the burst from a single sender, a large queue for many senders that are all ramping up at the same time can cause issues.
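To put rough numbers on the many-senders case: even if each peer's slow-start burst is tiny, hundreds of peers doubling their windows in the same RTT can blow past a queue that easily absorbs one sender. The initial window of 10 packets and the peer count are assumptions, not measurements:

```python
# Hypothetical arithmetic: aggregate slow-start from many peers vs. a
# queue sized for one fast sender. Numbers are illustrative assumptions.

QUEUE_PKTS = 2500      # the 2.5k receive-queue depth
INITCWND_PKTS = 10     # a common TCP initial congestion window

def peak_inflight_packets(n_senders: int, rtt_rounds: int) -> int:
    """Packets potentially in flight if every sender doubles its cwnd
    each RTT (slow start) faster than the shaper can drain them."""
    return n_senders * INITCWND_PKTS * (2 ** rtt_rounds)

print(peak_inflight_packets(1, 4))    # one sender, 4 RTTs: 160 packets
print(peak_inflight_packets(200, 1))  # 200 peers, 1 RTT: 4000 packets
```

One sender stays comfortably inside the queue for several RTTs, but 200 peers exceed 2.5k packets after a single doubling, which would explain the >50Mb/s spikes on the WAN side during torrent ramp-up.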
P2P also has a lot less burst than Google services. I don't really have the issue of 1Gb/s micro-bursting from torrents. If I remember correctly, Google uses a custom TCP setup where they purposefully burst the first X bytes at or near full line rate, to make better use of available bandwidth, and let network buffers worry about the bursts. The "problem" is that between my ISP and Google sits Level 3, with no congestion. It just lets that 1Gb/s burst right on through 8 hops and 250 miles.
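A quick illustration of why that near-line-rate burst is a problem on a shaped link: the data arrives roughly 20x faster than a 48Mb/s queue can drain it. The 1 MB burst size is an assumption for illustration:

```python
# Illustrative timing: a burst arriving at ~1 Gb/s line rate vs. the
# time a 48 Mb/s shaped link needs to drain it. Sizes are assumptions.

LINE_RATE = 1_000_000_000 / 8   # 1 Gb/s in bytes/second
SHAPED_RATE = 48_000_000 / 8    # 48 Mb/s in bytes/second

def arrival_vs_drain(burst_bytes: int) -> tuple[float, float]:
    """(seconds to arrive at line rate, seconds to drain when shaped)."""
    return burst_bytes / LINE_RATE, burst_bytes / SHAPED_RATE

arrive, drain = arrival_vs_drain(1_000_000)  # a hypothetical 1 MB burst
print(round(arrive * 1000, 1))  # ~8.0 ms to arrive at line rate
print(round(drain * 1000, 1))   # ~166.7 ms to drain at 48 Mb/s
```

The whole burst lands in a few milliseconds, then sits in a buffer for over 150 ms, so whatever device is doing the shaping has to eat the entire thing at once.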