I'm not sure if it's dumb luck, a sound configuration or something else entirely, but I've been able to get the HFSC shaper to work the way I want both times I've used it. The second time was in an environment with three LAN interfaces, and from what I can tell, the shaper actively prioritizes traffic among the internal interfaces the way I anticipated. Granted, neither pfSense deployment is earth-shattering (both are home environments), but from skimming the forum posts on this subject, I thought documenting a successful shaper setup with multiple LAN interfaces might be of interest.
The configuration consisted of a single WAN interface and three LAN interfaces: Verizon, Work & LAN. The firewall is actually a friend's, and we teamed up to sort out the necessary shaper configuration. The goals were simple: Verizon traffic takes precedence (he has FiOS, and on-demand videos can consume a portion of his "Internet" bandwidth); Work traffic trumps LAN traffic but not Verizon (his employer-provided VoIP phone and other work-from-home equipment sit on the Work interface, while LAN carries generic home internet); any interface should be able to use all available idle bandwidth but release it to higher-priority traffic; and no interface should be starved of bandwidth regardless of priority (the "fair service" in HFSC takes care of this).
We first ran through the multi-LAN wizard, but didn't specify any ports or protocols to prioritize; instead we used the wizard to stipulate upload & download bandwidth and build the base queues on the interfaces. Once that was done, we built a VZWeb queue on the Verizon interface, a WRKWeb queue on the Work interface and a LANWeb queue on the LAN interface, each as a child under that interface's Internet queue. These three queues were duplicated on the WAN interface and placed directly under the root queue.
Priority was expressed as a percentage in the m2 column of the Link Share row, since from what I've read HFSC ignores the numerical Priority field. I believe Link Share overrides Bandwidth, but we duplicated the percentage in the Bandwidth field for the sake of completeness. VZWeb was given 30%, WRKWeb 15% and LANWeb 5%. The Link Share m2 values on the ACK queue were left unchanged, but we did plug in 5% for the Realtime m2 value as a safety net.
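For anyone who thinks better in pf.conf than in the GUI, here's a rough sketch of what this boils down to in ALTQ terms. This is not the literal config pfSense generates: the interface name (vr1 for the Verizon interface), the link speeds, and the qDefault/qACK bandwidth figures are placeholder assumptions, and I'm assuming the wizard's usual qACK and qDefault queues. Only the Verizon interface is shown; Work and LAN repeat the same pattern with qWRKWeb at 15% and qLANWeb at 5%, and the WAN carries all three web queues directly under its root queue.

```
# Placeholder sketch of one internal interface (Verizon) in pf.conf/ALTQ
# syntax; pfSense writes the real config from the GUI. vr1, 100Mb, 25Mb
# and the qACK/qDefault numbers are assumptions, not values from the box.
altq on vr1 hfsc bandwidth 100Mb queue { qInternet }

# The wizard's Internet queue, holding everything shaped toward the WAN.
queue qInternet bandwidth 25Mb hfsc { qACK, qDefault, qVZWeb }

# ACK queue: Link Share m2 left at the wizard default, Realtime m2 set to
# 5% as a safety net so ACKs always get a sliver of guaranteed bandwidth.
queue qACK     bandwidth 20% hfsc (realtime 5%)

# Catch-all default queue created by the wizard.
queue qDefault bandwidth 10% hfsc (default)

# Web queue for this interface: priority expressed as the Link Share m2
# percentage (30% here; 15% on qWRKWeb, 5% on qLANWeb).
queue qVZWeb   bandwidth 30% hfsc (linkshare 30%)
```

As I understand HFSC's link-share curves, those m2 values mean any excess bandwidth is split 30:15:5 (i.e., 6:3:1) among whichever web queues are backlogged, which is exactly the "use it when idle, release it under load" behavior we were after.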
The rules were a little trickier: we couldn't get floating rules to direct traffic into the queues properly, but specifying queues on the existing rules in the interface tabs did the trick (e.g., the allow-LAN-to-any rule where LAN net is the source). We ran multiple non-interference tests (starting traffic on the higher-priority Verizon interface, then Work, then LAN) and non-blocking tests (going the other way: LAN first, then Work, then Verizon), and every interface received the appropriate share of bandwidth. LAN was the only one that dropped packets, which happened when it surrendered bandwidth to the other two.
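In pf terms, the LAN-tab rule ends up looking something like the sketch below. Again, this is a hedged approximation, not pfSense's actual output: the lan_if and lan_net macros are made-up placeholders, and the second queue name in the parentheses is where pf steers low-delay packets such as ACKs.

```
# Placeholder sketch of the LAN-tab pass rule with its queue assignment;
# the macro values are hypothetical, not names from the real config.
lan_if  = "vr3"
lan_net = "192.168.1.0/24"

# Allow LAN to any, queueing bulk traffic into qLANWeb and ACKs into qACK.
pass in on $lan_if from $lan_net to any keep state queue (qLANWeb, qACK)
```

My understanding is that the queue is applied by the rule that creates the state, which may be why tagging the per-interface pass rules worked for us where the floating rules didn't.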