Traffic shape all traffic to facebook - possible?
I would just like to know if it's possible to traffic shape all traffic to Facebook. I've used the traffic shaper before, but there is no option for limiting bandwidth to a particular site, in this case Facebook. Is this possible?
It isn't possible to shape by URL.
If you knew all of the IPs for Facebook (there are lists out there) you could do it by IP, shaping it like anything else at that point.
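As a sketch of where such a list could come from: routing registries like the RADB publish the prefixes originated by an ASN (AS32934 is Facebook's). This is an illustrative Python sketch, not a pfSense feature; the query format and output parsing are assumptions about RADB's whois output.

```python
# Hypothetical sketch: build an IP-prefix list for Facebook by querying the
# RADB route registry for routes originated by AS32934 (Facebook's ASN).
# The resulting prefixes could then be pasted into a firewall alias.
import socket


def parse_routes(whois_text):
    """Extract unique IPv4 prefixes from 'route:' lines of RADB output."""
    prefixes = set()
    for line in whois_text.splitlines():
        if line.startswith("route:"):
            prefixes.add(line.split()[1])
    return sorted(prefixes)


def fetch_routes(asn="AS32934", server="whois.radb.net"):
    """Query the whois server (port 43) for routes with the given origin ASN."""
    with socket.create_connection((server, 43), timeout=10) as s:
        s.sendall(f"-i origin {asn}\r\n".encode())
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return parse_routes(b"".join(chunks).decode(errors="replace"))


# Example (requires network access):
# print(fetch_routes())
```

The catch, as noted below, is that you'd have to re-run this whenever prefixes change.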
Something like this would be quite useful in today's cloud environment. It would be nice to squelch all traffic to and/or from specific domains regardless of IP, since IPs can change or additional ones can be brought online.
Except there is no way to obtain that information "by domain" except by DNS, which can be inaccurate.
On 2.0 you can make an alias that includes a hostname, which the filterdns daemon will periodically update. You can use an alias in the penalty box, so you can sort of accomplish this now. I'm not sure I would rely on its accuracy though.
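As a rough sketch of what that filterdns-style resolution amounts to (filterdns itself is a daemon, this is just Python mimicking the idea; the hostnames are illustrative): periodically resolve a set of names and keep the union of the addresses returned.

```python
# Minimal sketch of filterdns-style hostname resolution: resolve each
# hostname and collect the set of IPv4 addresses currently behind it.
# Accuracy depends entirely on DNS, hence the caveat above.
import socket


def resolve_ips(hostnames):
    """Return the set of IPv4 addresses the hostnames currently resolve to."""
    ips = set()
    for host in hostnames:
        try:
            for info in socket.getaddrinfo(host, None, socket.AF_INET):
                ips.add(info[4][0])
        except socket.gaierror:
            pass  # unresolvable names are simply skipped
    return ips


# Example (requires DNS): resolve_ips({"www.facebook.com", "fbcdn.net"})
```

A real daemon would re-run this on a timer and update the alias table with the result.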
I agree, DNS can sometimes be inaccurate. However, if you're an admin trying to keep your users happy by allowing them access to these sites while not letting them eat all of the bandwidth, DNS accuracy for a cloud service may not be that important, especially if it's a temporary problem. Using DNS is about the only way you can do this quickly. Who wants to maintain an accurate IP address list for some of these giant cloud services when you could just rate-limit traffic to/from *.youtube.com?
You can do these kinds of tricks in Cisco land using a class-map and policy-map, but not everyone has Cisco gear or config access to the telco's Cisco gear.
A Cisco class-map, if I recall correctly, can use a URL filter, which is probably what you're thinking of. It would still be inaccurate, though, especially if the CDN a place like Facebook uses isn't on facebook.com (see other things like fbcdn.net) or uses HTTPS.
There isn't a silver bullet to know for sure that a given connection is really going to a specific domain.
So are you saying something like what we are discussing here wouldn't be reliable 90+% of the time under normal operating conditions? Of course there's no perfect solution. How many perfect solutions are there in IT? There are many more compromised solutions than perfect ones. If something like this worked even 80+% of the time I could accept that, and I bet a lot of other admins could too. For example, email is not 100% reliable, but it's certainly useful.
I know about the other domains that Facebook uses, and that could be remedied, for the most part, by the same kind of function. It would certainly be a lot easier and less time-consuming than maintaining a list of IP addresses for each one of those monsters. Could I make an alias for the IPs of these domains? Sure I could. Do I want to maintain that list? No, I don't have the time.
I feel like this would be a fantastic feature, and I bet, as more clouds come online, you will see more questions like the OP's. That's all I'm saying.
Well, I'm saying there's nothing to add - you can do it that way now in 2.0 if you (a) know the IP blocks for the network, and/or (b) know the hostnames for DNS resolution. So if you can accept the potential unreliability of DNS in getting all the IPs you want, you can do that with the current code.
You can use layer7 for this.
Just create a regex for Facebook and map it to the queue/pipe you want.
The only problem is you cannot filter on HTTPS, and that's most of their traffic nowadays.
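For reference, l7-filter patterns live in small .pat files: the first non-comment line names the protocol, and the next line is the regex matched against the start of the connection data. A hypothetical facebook pattern (a sketch only - it matches the Host header of plain-HTTP requests, so per the caveat above it won't catch HTTPS) might look like:

```
# facebook.pat - hypothetical l7-filter pattern, illustrative only
facebook
^(get|post) .*host: .*(facebook\.com|fbcdn\.net)
```

The fbcdn.net alternative is there because, as noted earlier in the thread, Facebook serves much of its content from that CDN domain rather than facebook.com.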