Per-IP traffic shaping – share bandwidth evenly between IP addresses??
-
For caching huge files.
With a 0.0000000000000000023% hit rate. ::)
So this will work right if I configure the proxy server in non-transparent mode?
That or move the stupid thing to another box. Squid-induced breakage is totally OT on this thread.
Actually, caching is very helpful for me. I am not concerned about small-file caching, mainly .exe downloads, and it works for me.
The per-user net usage summary is also working for me.
-
Thanks Sideout for the tips. I tried both ways, using the default LAN rule and also the tip you gave me, i.e. a new rule above the default LAN rule with the limiters applied. No change in results; however, I noticed that if both clients are laptops on torrents (i.e. equal load), then it does do some bandwidth balancing.
Attached are the screenshots of my configuration, the graphs, and the limiter info.
I created a similar setup and applied it.
The setup is pfSense with Squid in non-transparent mode with WPAD. I am able to see a unique queue/pipe for each local IP in Diagnostics, but when one user started downloading over BitTorrent, the entire bandwidth was given to that user. (The test was performed with two users: one normal user downloading a file over HTTP and the other downloading over BitTorrent; the torrent user was getting about 90% of the speed.)
-
Use 2.1.5; it works perfectly with queues.
-
Tested on 2.2.4 and it works.
As I remember, TC and Squid don't work together.
-
Isn't it the case that bandwidth is always shared evenly, even if you do nothing with limiters etc.? That's also the case with WiFi, as far as I've experienced.
1 client: 100%
2 clients downloading: 50% each
3 clients: 33% each
etc… I mean, the router is trying its best to handle every client, right?
I'm going to run some tests here.
Also, is it really necessary to have 2 sub-queues if you have 2 VLANs? Can't you just specify the same up and down limiter for both?
With outgoing rules you have to specify the limiter for every rule you want it to apply to. I only set it up for HTTP/HTTPS.
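In raw dummynet terms the idea is the same: only traffic matched by a rule that points at a limiter queue gets shaped, and everything else bypasses it. A rough sketch; the subnet, interface, and queue number are placeholders, and on pfSense the per-rule In/Out pipe selection in the GUI does this matching rather than a hand-typed ipfw rule:

```sh
# Only HTTP/HTTPS from the LAN subnet is sent to queue 1; all other traffic
# never touches the limiter. Placeholder subnet, interface, and queue number.
ipfw add queue 1 tcp from 192.168.1.0/24 to any 80,443 out via igb1
```
-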
Isn't it the case that bandwidth is always shared evenly, even if you do nothing with limiters etc.? That's also the case with WiFi, as far as I've experienced.
1 client: 100%
2 clients downloading: 50% each
3 clients: 33% each
etc…
Not really. Because of the way fixed-size tail-drop FIFO queues interact with TCP, you tend to end up with one or a few dominant flows. If you use a fair queue or a head-drop queue like CoDel, you will get a much better distribution of bandwidth among all flows. More advanced traffic shapers, like Cake, can give even per-device bandwidth distribution and latency isolation, while also giving per-flow bandwidth distribution and isolation within any given device's traffic.
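As a rough illustration of the "fair queue" part at the dummynet layer that pfSense limiters ride on: a queue with a full flow mask gets one dynamic queue per flow, so the scheduler divides the pipe across flows instead of letting a single TCP stream win. The rate and interface below are placeholders, not anything from this thread:

```sh
# Illustrative dummynet config; pfSense builds the equivalent from the Limiter GUI.
ipfw pipe 1 config bw 120Mbit/s        # placeholder link rate
ipfw queue 1 config pipe 1 mask all    # "mask all" = one dynamic queue per flow (5-tuple)
ipfw add queue 1 ip from any to any via igb0
```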
The day PFSense gets Cake, that's what I'm using. It's not even ready for Linux yet, so don't hold your breath.
-
If it really helps… then why isn't it transparently built in already? There are lots of routers out there handling lots of clients perfectly fine without it… including pfSense… but if it is indeed better and fairer… then why not enable it on all traffic by default? Cake == CoDel? Sounds interesting. And yes, that checkbox 'share bandwidth evenly between all clients' is still missing ;) IF it really adds something!
-
If it really helps… then why isn't it transparently built in already? There are lots of routers out there handling lots of clients perfectly fine without it… including pfSense… but if it is indeed better and fairer… then why not enable it on all traffic by default? Cake == CoDel? Sounds interesting. And yes, that checkbox 'share bandwidth evenly between all clients' is still missing ;) IF it really adds something!
Why is it not enabled by default? Partly because the definitions of "better" and "fairer" are subjective.
Regarding whether it really works… this thread is popular for a reason. :)
-
If it really helps… then why isn't it transparently built in already? There are lots of routers out there handling lots of clients perfectly fine without it… including pfSense… but if it is indeed better and fairer… then why not enable it on all traffic by default? Cake == CoDel? Sounds interesting. And yes, that checkbox 'share bandwidth evenly between all clients' is still missing ;) IF it really adds something!
pfSense seems to try to balance "here's a checkbox that does black magic" against "I want to know exactly what the firewall is doing", especially since pfSense's primary users are enterprise users who know what they're doing. The more you do "transparently" by default, and the more black-magic checkboxes you create, the less control the user has.
-
pfSense has equivalent functionality with limiters.
http://doc.pfsense.org/index.php/Traffic_Shaping_Guide#Limiter
Limiters assign bandwidth to IP addresses. This means that I can't use the whole pipe if nobody else is using the connection. I originally used pfSense with limiters, but everyone got pissed that their internet was only 1/10 of the speed all the time. m0n0wall dynamically assigns bandwidth based on use: 90% of the time you get the whole connection, and it only slows down when someone else is also using it.
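To make the difference concrete at the dummynet level (rates are made up: a 12 Mbit link split across up to 10 hosts), a pipe with a per-IP mask is a hard cap per host, while queues under one shared pipe let a lone host borrow the whole link:

```sh
# Static split: every LAN IP gets its own 1.2 Mbit pipe, even when the rest of
# the link is idle (the "1/10 the speed all the time" complaint).
ipfw pipe 1 config bw 1200Kbit/s mask dst-ip 0xffffffff

# Dynamic sharing: one 12 Mbit pipe with a dynamic queue per LAN IP under it.
# A single host can use all 12 Mbit; the scheduler only splits it when others are active.
ipfw pipe 2 config bw 12Mbit/s
ipfw queue 2 config pipe 2 mask dst-ip 0xffffffff
```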
I have implemented exactly what you're talking about by using two parent limiters (up and down) and creating three child queues under each (the child queues are for each of my three LAN subnets; the upload child queues have a 'source address' mask set, and the download queues have the 'destination address' mask set). I set the default pass rules for those subnets to use their appropriate child queues.
I do not know if the limiters will behave in the desired fashion if you are assigning traffic directly to a parent limiter, even with the mask set. At the very least, a single child queue, used the way I am using them, would work.
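For reference, the dummynet shape of that layout is roughly the following; the rates, queue numbers, and interface assignments are illustrative only, since pfSense generates all of this from the Limiter GUI and the pass rules rather than from hand-typed commands:

```sh
# Parent limiters: total download and upload rates (placeholders).
ipfw pipe 1 config bw 120Mbit/s   # download parent
ipfw pipe 2 config bw 12Mbit/s    # upload parent

# Child queues, one per LAN subnet. Download queues mask on destination IP (the
# receiving LAN host), upload queues mask on source IP (the sending LAN host),
# so each host gets its own dynamic queue and an even share of its parent pipe.
ipfw queue 1 config pipe 1 mask dst-ip 0xffffffff   # subnet A download
ipfw queue 2 config pipe 1 mask dst-ip 0xffffffff   # subnet B download
ipfw queue 3 config pipe 2 mask src-ip 0xffffffff   # subnet A upload
ipfw queue 4 config pipe 2 mask src-ip 0xffffffff   # subnet B upload
# ...and so on for the third subnet; each interface's default pass rule points
# at its own download/upload queue pair.
```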
I have used your documentation to set up the limiters on my WAN and LAN interfaces. I am using the traffic graph to monitor activity, but how do I know that it is working? (Other than the obvious fact that all my clients can use the internet simultaneously.)
Thanks for your tutorial!
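One way to check beyond the traffic graph is to look at the limiter state on the dummynet side: Diagnostics > Limiter Info in the GUI, or the equivalent from a shell, should show one dynamic queue per active host IP with its byte/packet counters moving while downloads run. The commands below are a sketch; depending on the pfSense/FreeBSD version the tool is ipfw or dnctl, and the output format varies:

```sh
# List configured pipes and the dynamic per-host queues with their traffic counters.
ipfw pipe show
ipfw queue show
```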
-
foxale08, can you please tell me if I am on the right path here? I have a LAN, OPT1 and OPT2. LAN has rules set up as used in foxale08's tutorial. OPT2 is my public network where I don't trust the devices and I want to limit them to like 1/5 of my bandwidth. You can see my screenshots below. The limiter seems to be working fine on the LAN, and I tried to mimic the tutorial for OPT2 to see if I could do it there as well. Am I on the right track?
Thanks guys!
-
No idea what you're trying to do with those rules. First you pass OPT1 net to !LAN net, then you pass any to any.
Nothing will be blocked, and only traffic to LAN net will be limited, because everything else is passed by the first rule, which doesn't have any queues set.
-
No idea what you're trying to do with those rules. First you pass OPT1 net to !LAN net, then you pass any to any.
Nothing will be blocked, and only traffic to LAN net will be limited, because everything else is passed by the first rule, which doesn't have any queues set.
Thank you for that Derelict. What I am trying to do is to prevent any devices on OPT1 from accessing any devices on the LAN interface (my private network). Should I set a limiter on both rules?
-
If you want to BLOCK traffic then BLOCK it.
On OPT1:
Reject IPv4 source OPT1 Net dest LAN net protocol any
Then you probably also want to:
Reject IPv4 source OPT1 Net dest This Firewall protocol any.
Above those, you want to pass any local assets you want OPT1 Net to access, like DNS.
And, as an aside, none of this has anything to do with limiters or this thread.
-
If you want to BLOCK traffic then BLOCK it.
On OPT1:
Reject IPv4 source OPT1 Net dest LAN net protocol any
Then you probably also want to:
Reject IPv4 source OPT1 Net dest This Firewall protocol any.
Above those, you want to pass any local assets you want OPT1 Net to access, like DNS.
And, as an aside, none of this has anything to do with limiters or this thread.
I appreciate your input. I don't know why I didn't put a reject rule in there from OPT1 to LAN. I had tested it anyway from OPT1, trying to access the LAN network, and I could not get through, which I thought was due to that first rule.
This is only coming up because I don't know how to set up the limiters with multiple LAN networks (basically taking foxale08's original technique to another level). If anyone has a tutorial on that kind of setup, I would be grateful for a link. I could just be incredibly blind, but I did read through this entire thread and didn't find it here.
-
Make a different queue with the characteristics you want and put the limiters on the pass rule on that interface. There's nothing magic about it.
-
Make a different queue with the characteristics you want and put the limiters on the pass rule on that interface. There's nothing magic about it.
I think I got it now. Thanks for your help Derelict.
-
Hi there. The limiters work, but they always work (even for one client). It's the same as daq wrote in August 2014, but he did not get an answer on how to get it working correctly, so I hope that is still possible.
If I set the limiters to 6 Mbps, then the total available bandwidth (120 Mbps) goes down to 6 Mbps. When I start a second download/speed test, they both drop evenly to 3 Mbps.
-
I probably misunderstood the original post; when I enter the full bandwidth (120 Mbps) in the limiter, it works as expected. I tested it 4 times: 1 user gets 120, 2 users get 60 each, 3 users get 40 each, and 4 users get 30 each. Perfectly evenly divided, great. Thanks for this post!
-
Huh?
How do you want it to work? That's exactly the expected behavior.
If you want the first host to be able to get 120 Mbps, then that's what you set the limiter to. Then you create a child limiter under it that masks on each source/destination IP address. Then, per active host, you'd get something like:
120
60/60
40/40/40
30/30/30/30
24/24/24/24/24
20/20/20/20/20/20
etc.