How's Google getting past my alias lists?
-
That's pretty cool. :) Even if it's not perfect, it's going to stick a sufficiently large spanner in most people's browsing that they'll probably give up and do something useful instead.
Edit: Oops. Referring to BBcan177's suggestion.
Steve
-
This is EXACTLY why Layer7 is needed in pfSense….
Unless we are talking https traffic, then you need some sort of domain blacklist or IP range as BBcan17 is saying...
-
This is EXACTLY why Layer7 is needed in pfSense….
Unless we are talking https traffic, then you need some sort of domain blacklist or IP range as BBcan17 is saying...
Almost all popular web services use HTTPS … Layer7 won't help, nor will Squid without being an evil admin.
-
Have to say, if you do block all of Google's IP addresses, you'll find most web pages painfully slow to load, as it takes the browser a long time to time out when it's pulling code or other resources from Google.
When a web page has something to do with Google, be it their AJAX libraries, Google APIs or what have you, it can easily take a few minutes to load, destroying any user experience. The delay depends on how many links to Google there are: the more links, the longer the delay.
If Google went offline, you'd have a riot on your hands with people getting fed up waiting for web pages to load, and lots of businesses could suffer as well. But their techniques for making their services work irrespective of how badly configured a network may be are quite illuminating when considering ways to tackle malware that hasn't been identified by any major AV company.
I've yet to analyse how many websites use Google services of some sort, but I suspect Western countries are very heavy users of their services compared to other parts of the world, not necessarily English-speaking.
Referring to BBcan177's suggestion, it looks like mismatches will occur, as the sources might not be up to date with whatever the resolver might have got from the various gTLD servers.
Still haven't figured out how this came to be, though:
Feb 2 20:06:30 unbound: [63509:0] info: response for pfmechanics.com.MyDomainNameWhichWillRemainPrivate. A IN
I've not created a "pfmechanics.com" subdomain for my domains, so I'm trying to find out why pfSense is doing this. Any ideas?
TIA
-
Alternatively you could block this at the DNS level on the pfSense itself.
Create a NAT rule on your LAN interface with destination any, port 53. Redirect destination 127.0.0.1, port 53.
–> This will force all DNS lookups, no matter which DNS server the client uses, to your pfSense.
In the DNS forwarder config you can add something like:
address=/google.com/127.0.0.1
This will resolve all google.com domains and subdomains to 127.0.0.1.
Replace 127.0.0.1 with a local server and you will see on it when something is sent to it.
See also: https://doc.pfsense.org/index.php/Wildcard_Records_in_DNS_Forwarder
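For illustration, the GUI NAT rule described above corresponds roughly to a pf redirect like the following (a sketch only; pfSense generates the actual rule from the GUI, and the interface macro `$lan_if` is a placeholder for your real LAN interface, e.g. em0):

```
# Redirect all client DNS queries (TCP and UDP, port 53) to the firewall itself,
# regardless of which DNS server the client thinks it is talking to.
rdr on $lan_if proto { tcp, udp } from any to any port 53 -> 127.0.0.1 port 53
```

This is why manually configured third-party DNS servers on client machines still end up answered by pfSense.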
-
Have to say, if you do block all of Google's IP addresses, you'll find most web pages painfully slow to load, as it takes the browser a long time to time out when it's pulling code or other resources from Google.
It's for this reason that most popular ad blockers replace the requests with some locally served data instead.
Steve
-
I like GruensFroeschli's suggestion. I'd consider it the default method of controlling DNS queries, because it doesn't even matter if people on your net manually configure alternate DNS servers on their machines; they will still only get pfSense DNS (Unbound or whatever you are running).
-
Something like this??
-
See also: https://doc.pfsense.org/index.php/Wildcard_Records_in_DNS_Forwarder
Thanks for highlighting this! I'm using the Resolver at the moment, as it's now the default DNS method in pfSense 2.2, but I'll see if I can use what you have highlighted in the Resolver somehow, as it seems like the Windows lmhosts/hosts file trick but running on pfSense.
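For what it's worth, Unbound (the Resolver) has an equivalent to dnsmasq's address=/…/ wildcard: a "redirect" local-zone. A sketch, assuming you place it in the Resolver's custom options (the zone name and 127.0.0.1 target are just the example from above):

```
# Answer google.com and every subdomain of it with 127.0.0.1
server:
  local-zone: "google.com" redirect
  local-data: "google.com A 127.0.0.1"
```

The "redirect" zone type makes the local-data answer apply to all names below the zone, which is what gives the wildcard behaviour.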
-
Something like this??
You need to set the interface to the one on which the DNS requests arrive.
In most cases this is the LAN interface, or whatever your clients are connected to.
See attached image.