I think one way to solve this problem might be to use limiters with multiple queues, using weights on the queues to limit how much bandwidth any one machine can consume.
For example, assuming you had two machines, you could use limiters and create two queues (let's call them queueA and queueB) under your upload and download limiters, assigning each a weight of 50. Then create the necessary firewall rules to pass traffic from machine 1 through queueA and traffic from machine 2 through queueB. This should ensure that each machine gets at least 50% of the bandwidth, and more if the other machine is idle. I've got a similar setup, but it's done by subnet rather than by machine/host (i.e. to ensure each individual subnet gets at least a certain % of bandwidth when the connection is loaded down).
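To make this concrete, here's a rough sketch of what the equivalent configuration looks like at the dummynet level using FreeBSD's ipfw syntax (on pfSense you'd build the same thing through the Limiters GUI rather than typing these commands). The addresses 192.168.1.10 / 192.168.1.20 and the 10/5 Mbit/s rates are placeholders for your own hosts and connection speed:

```shell
# One pipe per direction, set to the full connection speed.
ipfw pipe 1 config bw 10Mbit/s    # download limiter
ipfw pipe 2 config bw 5Mbit/s     # upload limiter

# Two weighted queues per pipe; equal weights give each host at least
# half the bandwidth, and a busy host can borrow the idle host's share.
ipfw queue 1 config pipe 1 weight 50   # queueA, download
ipfw queue 2 config pipe 1 weight 50   # queueB, download
ipfw queue 3 config pipe 2 weight 50   # queueA, upload
ipfw queue 4 config pipe 2 weight 50   # queueB, upload

# Firewall rules to steer each machine's traffic into its queues.
ipfw add queue 1 ip from any to 192.168.1.10 in
ipfw add queue 3 ip from 192.168.1.10 to any out
ipfw add queue 2 ip from any to 192.168.1.20 in
ipfw add queue 4 ip from 192.168.1.20 to any out
```

The key point is that the weights only matter under contention: when both hosts are pushing traffic, the scheduler splits the pipe 50/50; when one is idle, the other gets the whole pipe.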
A further option would be to use multiple limiters with multiple queues. For instance, you could create an upload and download limiter for machine A and another pair for machine B, and limit the bandwidth of each set of limiters to 50% of your connection speed. Then, underneath those limiters, you could create queues to prioritize your traffic. While this allows for easier prioritization of traffic in multiple queues, the downside is that machines A and B will never see more than 50% of your bandwidth, even when the other machine is idle. That's the tradeoff. Note that you can still use weights on your queues in this approach if you want to guarantee bandwidth to certain types of traffic on a given host (e.g. P2P vs. HTTP, etc.)
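The second approach looks roughly like this in ipfw/dummynet terms (again, a sketch with placeholder rates assuming a 10 Mbit/s connection; the weight values and traffic split are just examples, and on pfSense you'd do this in the GUI). Each host gets its own hard-capped pipe, with prioritization queues underneath:

```shell
# Hard caps: each host's download pipe is fixed at 50% of the connection.
ipfw pipe 1 config bw 5Mbit/s     # machine A download, capped at 50%
ipfw pipe 2 config bw 5Mbit/s     # machine B download, capped at 50%

# Queues under machine A's pipe prioritize interactive traffic over bulk;
# the 75/25 split here is an arbitrary example.
ipfw queue 1 config pipe 1 weight 75   # e.g. HTTP/interactive
ipfw queue 2 config pipe 1 weight 25   # e.g. P2P/bulk
```

Because the cap lives on the per-host pipe itself, neither host can exceed 5 Mbit/s here regardless of what the other is doing, which is exactly the tradeoff described above.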
I realize there are limitations to these approaches, and if you have many machines then it's probably not practical. There might be more elegant solutions out there, but unfortunately I'm not aware of an easier way to share bandwidth equally between hosts whose traffic all goes through the same queue (vs. setting up multiple queues).
Hope this helps.