To give a general explanation of this:
TCP uses a window to limit how much unacknowledged data can be in flight at once.
Most TCP congestion-control algorithms treat packet loss as the signal to back off.
Bufferbloat means hundreds of milliseconds' worth of data can sit in a buffer and trickle in to you.
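To make the window/backoff mechanics concrete, here's a toy AIMD (additive increase, multiplicative decrease) sketch. Real stacks like Reno, CUBIC, or BBR are far more sophisticated; the point is just that the window only shrinks when a loss shows up, so if a bloated buffer hides every loss, the window keeps growing.

```python
# Toy AIMD congestion window: grow by one segment per RTT, halve on loss.
# Purely illustrative; real TCP congestion control is much more involved.

def aimd(rounds, loss_rounds, cwnd=1):
    """Yield the congestion window (in segments) for each RTT.
    loss_rounds is a set of RTT indices where a loss was detected."""
    for rtt in range(rounds):
        yield cwnd
        if rtt in loss_rounds:
            cwnd = max(1, cwnd // 2)   # multiplicative decrease on loss
        else:
            cwnd += 1                  # additive increase otherwise

if __name__ == "__main__":
    # Losses at RTTs 10 and 15, chosen arbitrarily for illustration.
    for rtt, window in enumerate(aimd(20, {10, 15})):
        print(f"RTT {rtt:2d}: cwnd = {window} segments")
```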
Now imagine this. Your bloated buffers can hold 500KiB of data. Netflix wants to send you an average of 5Mb/s in 250KiB chunks while reusing TCP connections. If Netflix sends you a 250KiB chunk at 10Gb/s, which you can't receive that fast, your cable/DSL modem's buffer holds all of the data. Since no packets are dropped, Netflix never knows to back off. Since all of the data fits within the TCP window, and the bloated buffer can hold the entire window, you get line-rate bursts.
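Some back-of-the-envelope numbers for that scenario (the 100Mb/s access rate is my own assumption, roughly matching the plan I had; the 500KiB buffer and 250KiB chunks are from the example above):

```python
# Rough timing for the 250KiB-chunk example. Only the 100 Mb/s access
# rate is an assumption; the other numbers come from the scenario above.

KIB_BITS    = 1024 * 8        # bits per KiB
chunk_bits  = 250 * KIB_BITS  # one 250KiB chunk
buffer_bits = 500 * KIB_BITS  # the bloated 500KiB modem buffer

send_rate   = 10e9            # the sender bursts at 10 Gb/s
access_rate = 100e6           # the access link drains at 100 Mb/s (assumed)

print(f"chunk arrives in   {chunk_bits / send_rate * 1e3:6.2f} ms at 10 Gb/s")
print(f"chunk drains in    {chunk_bits / access_rate * 1e3:6.2f} ms at 100 Mb/s")
print(f"a full buffer adds {buffer_bits / access_rate * 1e3:6.2f} ms of queuing delay")
```

The burst lands in a fraction of a millisecond, sits in the modem, and dribbles out over roughly 20ms; anything else sharing the link waits behind up to ~40ms of queued video.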
This is why bufferbloat is bad.
I had a variation of this. My ISP has an elastic buffer that allows bursts through. Instead of the buffer soaking up the burst and slowly trickling it through, it let the burst pass, then started to clamp down. This meant my computer would receive the data at the full 1Gb/s even though, at the time, I had a 100Mb connection. My computer would ACK all of that data, making the sender think I actually had a 1Gb connection. As they continued to send 1Gb/s at me, my ISP's shaping algorithm would restrict the bandwidth and start to drop packets. This caused a burst of packet loss at the start of any heavy, low-latency TCP connection.
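To put rough numbers on that mismatch (the 30ms RTT is a guess; the 1Gb/s burst and 100Mb plan are from above): once the sender believes it has a 1Gb/s path, every round trip it pushes far more data than the clamped link can deliver, and the excess is what shows up as that burst of loss.

```python
# How much excess data piles up in one RTT once the ISP clamps down.
# The 30 ms RTT is an assumed value; the rates come from the story above.

burst_rate = 1e9       # what the elastic buffer let through, bits/s
plan_rate  = 100e6     # what the connection actually sustains, bits/s
rtt        = 0.030     # assumed round-trip time, seconds

sent      = burst_rate * rtt   # data a ramped-up sender pushes per RTT
delivered = plan_rate * rtt    # data the clamped link can carry per RTT

print(f"pushed per RTT:    {sent / 8e6:5.2f} MB")
print(f"delivered per RTT: {delivered / 8e6:5.2f} MB")
print(f"excess per RTT:    {(sent - delivered) / 8e6:5.2f} MB (queued or dropped)")
```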
I actually fixed this by having PFSense shape my downloads. Instead of just letting the 1Gb burst through, PFSense would buffer it and start dropping some packets before my ISP did. This did two things: 1) it delayed the packets, and 2) it dropped a few packets early on, before the sender had ramped up to full speed, so the sender backed off sooner and fewer packets were lost overall.
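I won't reproduce the exact PFSense rules here, but the idea boils down to a rate limiter set slightly below the ISP's rate with a finite queue in front of it, so drops happen early and gently instead of all at once when the ISP clamps down. A minimal token-bucket sketch of that idea (the 95Mb/s rate, queue depth, and packet sizes are made-up illustration values, not my actual config):

```python
# Sketch of "shape below the ISP rate yourself": a token bucket refilled
# slightly below the ISP's rate, with a finite queue in front of it.
# Packets that overflow the queue are dropped early, which tells the
# sender to back off before the ISP's policer throws away a whole burst.
# All values here are illustrative, not the real PFSense configuration.

from collections import deque

class IngressShaper:
    def __init__(self, rate_bps=95e6, queue_limit=192 * 1024, bucket=15_000):
        self.rate_Bps = rate_bps / 8      # shaping rate in bytes/second
        self.queue_limit = queue_limit    # max queued bytes before dropping
        self.bucket = bucket              # max token (burst) allowance, bytes
        self.tokens = 0.0
        self.queue = deque()              # queued packet sizes, FIFO
        self.queued_bytes = 0
        self.last = 0.0                   # time of the previous arrival

    def offer(self, now, size):
        """Return 'sent', 'queued', or 'dropped' for one arriving packet."""
        # Refill tokens for the elapsed time, then drain the queue lazily
        # (a real shaper would drain on a timer, not only on arrivals).
        self.tokens = min(self.bucket,
                          self.tokens + (now - self.last) * self.rate_Bps)
        self.last = now
        while self.queue and self.tokens >= self.queue[0]:
            pkt = self.queue.popleft()
            self.tokens -= pkt
            self.queued_bytes -= pkt
        if not self.queue and self.tokens >= size:
            self.tokens -= size
            return "sent"
        if self.queued_bytes + size <= self.queue_limit:
            self.queue.append(size)
            self.queued_bytes += size
            return "queued"               # delayed, like point 1 above
        return "dropped"                  # early drop, like point 2 above

if __name__ == "__main__":
    shaper = IngressShaper()
    outcome = {"sent": 0, "queued": 0, "dropped": 0}
    t = 0.0
    for _ in range(170):                  # ~250KiB burst of 1500-byte packets
        outcome[shaper.offer(t, 1500)] += 1
        t += 1500 * 8 / 1e9               # packets arriving back-to-back at 1 Gb/s
    print(outcome)                        # mostly 'queued', only the tail 'dropped'
```

Running it on a simulated 1Gb/s burst, most of the burst gets delayed and only the tail gets dropped, which is exactly the feedback the sender needs to back off before ramping to full speed.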