Taming the beasts… aka suricata blueprint
-
@irj972:
Question re floating rules for those who understand the black magic…
For the Pass Floating Rule, did you select both the inbound and outbound interfaces?
-
I duplicated what I had with PRI1 etc. for in & out, with the same set of interfaces, i.e.:
Whitelist Source = WAN (& VPN_WAN)
Whitelist Dest = LAN (& VPN_LAN)
Pic attached in case it helps.
EDIT: I trimmed some info from my setup to reduce noise; however, I now suspect this omission might have something to do with my problem. I suspect a gateway-related issue, as I run a VPN and it possibly doesn't know how to route packets originating from the VPN interface… I'll try to verify.
If anyone knows any good resources on how to build good firewall rules, I'd appreciate the pointer; learning by trial and error is tough going. Thanks again.
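(For anyone else chasing this: the compiled ruleset and the table contents can at least be checked from a shell, using the IR_ names from this guide. It won't explain a gateway problem, but it shows what pf actually loaded.)
# show the loaded rules that reference the whitelist/blocklist tables
pfctl -sr | grep IR_
# and confirm one of the tables is actually populated
pfctl -t IR_PRI1 -T show | head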
-
About IPv6 blacklisting.
It seems that experts in general hold the opinion that blacklisting has not been all that effective in the IPv4 world, so we may as well just abandon it for IPv6 and concentrate on other tactics instead.
Say we still want to blacklist: I think we are going to quit blacklisting individual addresses altogether and ban entire /64s or /42s outright.
Has anyone dabbled in IPv6 blacklisting yet? How is the quality of public IPv6 blacklists?
-
@G.D.:
About IPv6 blacklisting.
It seems that experts in general hold the opinion that blacklisting has not been all that effective in the IPv4 world, so we may as well just abandon it for IPv6 and concentrate on other tactics instead.
Say we still want to blacklist: I think we are going to quit blacklisting individual addresses altogether and ban entire /64s or /42s outright.
Has anyone dabbled in IPv6 blacklisting yet? How is the quality of public IPv6 blacklists?
I have yet to be hit by anything "massive" in IPv6, so I'll just speculate.
IPv6 /64s are the equivalent of IPv4 /24s. An attacker can "move" within that subnet (dynamic IP, or just fast-changing static IPs), and as long as he moves, the blacklisting has already gone to hell. I understand that an attacker can also move to a different subnet, but the same thing applies.
The way I currently do it is: if I see a single IP from a subnet, I block it (already taken care of by Suricata). If I see multiple IPs blocked from that subnet, I block the entire subnet and alert their upstream. If I'm ignored (Suricata still firing off alerts), I look into the subnet more. What country does it come from? Does the upstream already have a different subnet on my permanently banned lists? How long (going back a few years) has that subnet been "naughty"? Did the same ISP have any other interesting subnets (again, going back a few years)? Depending on the answers to those questions and the severity of the traffic, the entire subnet is added to my permanently banned lists.
That's why I strongly disagree with using every single list that's out there. Use the bare essentials, weed out 90% of the traffic, let Suricata weed out another 5%, and only deal with the 5% that remains. Where it becomes difficult to keep it all together is hosts on the permanently banned list still causing alerts in Suricata. There is a way to ignore them using a passlist, but it's not ideal. What I really need to do is ignore the packets originating from a predefined alias and not send them to Suricata at all, but that's a different story.
I think the way IPv6 will be handled will be similar to what I described above. The bad thing about blocking entire subnets is the "innocent" people that go down with them. I consider them collateral damage. In practice (as described in this guide) the way to perfect (and yes, there IS perfect) network security is approaching it in a layered way. And anyone already typing "but eventually you will make a mistake": stop typing and read on.
By layered I mean:
- build a trench around your castle. You have already stopped the horsemen.
- build high strong walls, with sentries and hot oil. You have already stopped the heavy infantry.
- build a strong gate. You have already stopped the rams.
What about catapults? What about them setting your gate on fire? Setting the gate on fire is easily circumvented. Get an iron gate. Catapults? You should already know that catapults are coming, and you should have already sabotaged them. What good are spies if you don't use them?
No, I didn't go off on a tangent. Dealing with portscanners shouldn't be your priority. That should already be handled by Suricata/pfSense. If the person doesn't know what's running on your hosts, then he can't just guess "ah, they are running Ubuntu, let's launch this exploit hoping it will work".
How many of your webservers are advertising the server's version in the response? Why not just say "webserver"? The attacker is already trying to find vulnerabilities in your network by interacting with the server. Does your server respond in a way that will alert Suricata that host X is poking around, resulting in automatic banning? Does the host handle the automatic banning itself, then push the blocked host to your router? Is the software running on the server properly updated, or is it 13-year-old software (sidenote: I checked a long list of hosts the other day (not mine) and some of them were running 13-year-old software!!!)? If it's not updated, why is it not updated? If it is updated, are there any other mitigations enabled? Is the underlying OS hardened in any way? Or is it happily running a completely up-to-date X FTP server, but also happily rejecting bad logins without ever enforcing maximum retries? Is every connected SSH client jailed? Is every single PHP script running in the same pool? Are basedir restrictions being applied, or can the attacker simply upload a script anywhere and execute it? Does the webserver have access to the same memory that the SSH server uses? Can it write to it?
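A trivial example of the kind of check I mean (curl is assumed to be installed somewhere handy, and the hostname is obviously a placeholder):
# see exactly what your web server volunteers about itself
curl -sI http://www.example.com/ | grep -i '^server:'
If that line says anything more specific than "webserver", you are handing out information for free.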
If you did your absolute best to secure a network and somebody managed to get in, you didn't make a mistake. You simply didn't try hard enough.
Using lists + pfSense + Suricata you have already cut down the bad traffic by a LARGE amount. Don't simply sit idly and watch the rest of the traffic go through. Did something new pop up in the logs? Investigate it, don't ignore it.
MLS systems aren't simply a matter of installing SELinux, setting it to MLS and boasting "Hey, look at me, I'm running an MLS system!". NO. MLS systems are systems that were designed to separate security into different layers, with each layer dealing with something as efficiently as possible.
As long as the systems are capable of protecting themselves, you shouldn't need to worry about mass blocking hosts. That will naturally come on its own. What this does is add yet another layer to the security: Security through obscurity. Note to industry leaders: FOR THE MILLIONTH TIME: SECURITY THROUGH OBSCURITY IS REAL SECURITY.
Now, if only there was a way to keep track of the blocked hosts and identify subnets that could be automatically banned… I'm looking at you, BBcan177 ;) (same thing the script does, but using pfSense's snort2c table).
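Something as crude as this already shows which subnets keep coming back (snort2c is pfSense's own block table; IPv4 only, and grouping on /24s is just an assumption):
# count blocked snort2c entries per /24 and list the worst offenders
pfctl -t snort2c -T show | cut -d . -f 1-3 | sort | uniq -c | sort -rn | head
Anything with more than a handful of entries is a candidate for a whole-subnet ban.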
Reaction after previewing the post: "When did I type all that?"
-
Getting a lot of these errors from lists in my syslog:
php: rc.filter_configure_sync: The command '/usr/bin/fetch -T 5 -q -o '/var/db/aliastables/AbusePalevo.txt.tmp' 'https://127.0.0.1:43/badips/AbusePalevo.txt'' returned exit code '1', the output was 'fetch: https://127.0.0.1:43/badips/AbusePalevo.txt: Operation timed out'
-
The port is wrong in your aliases. It should be https://127.0.0.1:443/badips/AbusePalevo.txt, not :43.
Something looks off about the directory too…
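A quick way to test the corrected URL from a shell, before waiting for the next sync (assuming the webConfigurator really is listening on 443):
# same fetch the alias update uses, pointed at a throwaway file
/usr/bin/fetch -T 5 -q -o /tmp/AbusePalevo.test 'https://127.0.0.1:443/badips/AbusePalevo.txt'
head /tmp/AbusePalevo.test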
-
Getting a lot of these errors from lists in my syslog:
php: rc.filter_configure_sync: The command '/usr/bin/fetch -T 5 -q -o '/var/db/aliastables/AbusePalevo.txt.tmp' 'https://127.0.0.1:43/badips/AbusePalevo.txt'' returned exit code '1', the output was 'fetch: https://127.0.0.1:43/badips/AbusePalevo.txt: Operation timed out'
-
The port is wrong in your aliases. It should be https://127.0.0.1:443/badips/AbusePalevo.txt, not :43.
Something looks off about the directory too…
Here is where my de-duplicated lists are:
$ ls /usr/local/www/badips
ALIENVAULT.txt AbusePalevo.txt AbuseSpyeye.txt AbuseZeus.txt Atlas_Attacks.txt Atlas_Botnets.txt Atlas_Fastflux.txt Atlas_Phishing.txt Atlas_SSH.txt Atlas_Scans.txt Blut_TOR.txt CIArmy.txt DRG_SSH.txt DRG_VNC.txt DRG_http.txt DangerRulez.txt ET_Comp.txt ET_TOR.txt Feodo_Bad.txt Feodo_Block.txt Geopsy.txt IBlock_BT_FS.txt IBlock_BT_Hijack.txt IBlock_BT_Spy.txt IBlock_BT_Web.txt IBlock_Badpeer.txt IBlock_Onion.txt Infiltrated.txt MDL.txt MalwareGroup.txt NOThink_BL.txt NOThink_Malware.txt NOThink_SSH.txt OpenBL.txt SRI_Attackers.txt SRI_CC.txt Shunlist.txt Spamhaus_drop.txt Spamhaus_edrop.txt VMX.txt WatchGuard.txt dShield_Block.txt dShield_Top.txt malc0de.txt
OK, so it should really be "https://127.0.0.1:443/usr/local/www/badips/AbusePalevo.txt"?
When done correctly, shouldn't the IPs show up in the lists' Values under Firewall > Aliases?
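(Side note: one way to confirm an alias actually made it into pf, assuming the pf table name matches the alias name, e.g. AbusePalevo, is from the shell:)
# list the table contents and count the entries
pfctl -t AbusePalevo -T show | head
pfctl -t AbusePalevo -T show | wc -l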
Also, would pfBlocker.widget.php_PATCH and diag_dns.php_PATCH work with 2.1.5?
When I try to test the patches I get this error:
Output of full patch apply test:
/usr/bin/patch --directory=/usr/local/www/ -t -p1 -i /var/patches/5401274d202ba.patch --check --forward --ignore-whitespace
Hmm... Looks like a unified diff to me...
The text leading up to this was:
--------------------------
|--- /usr/local/www/widgets/widgets/pfBlocker.widget.php 2014-06-28 13:11:18.000000000 -0400
|+++ /usr/local/www/widgets/widgets/pfBlocker.widget.php 2014-06-28 13:06:55.000000000 -0400
--------------------------
No file to patch. Skipping...
Hunk #1 ignored at 2.
Hunk #2 ignored at 29.
Hunk #3 ignored at 39.
Hunk #4 ignored at 53.
Hunk #5 ignored at 61.
Hunk #6 ignored at 84.
Hunk #7 ignored at 92.
7 out of 7 hunks ignored--saving rejects to usr/local/www/widgets/widgets/pfBlocker.widget.php.rej
done
Output of full patch apply test:
/usr/bin/patch --directory=/usr/local/www/ -t -p1 -i /var/patches/5401259cb1b0c.patch --check --forward --ignore-whitespace
Hmm... Looks like a unified diff to me...
The text leading up to this was:
--------------------------
|--- /usr/local/www/diag_dns.php 2014-06-23 14:22:26.000000000 -0400
|+++ /usr/local/www/diag_dns.php 2014-06-23 14:22:02.000000000 -0400
--------------------------
No file to patch. Skipping...
Hunk #1 ignored at 114.
Hunk #2 ignored at 158.
Hunk #3 ignored at 179.
Hunk #4 ignored at 276.
4 out of 4 hunks ignored--saving rejects to usr/local/www/diag_dns.php.rej
done
-
Getting a lot of these errors from lists in my syslog:
php: rc.filter_configure_sync: The command '/usr/bin/fetch -T 5 -q -o '/var/db/aliastables/AbusePalevo.txt.tmp' 'https://127.0.0.1:43/badips/AbusePalevo.txt'' returned exit code '1', the output was 'fetch: https://127.0.0.1:43/badips/AbusePalevo.txt: Operation timed out'
How did you configure the Alias Table? It should be a "URL Table".
I would also recommend just using the existing IR_ Tables that the Script already created. This saves you the effort of making dozens of Alias Tables.
/usr/local/www/aliastables/IR_MAIL
/usr/local/www/aliastables/IR_PRI1
/usr/local/www/aliastables/IR_IB
/usr/local/www/aliastables/IR_SEC1
/usr/local/www/aliastables/IR_PRI2
/usr/local/www/aliastables/IR_SEC3
/usr/local/www/aliastables/IR_TOR
/usr/local/www/aliastables/IR_SEC2
/usr/local/www/aliastables/IR_PRI3
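A quick sanity check that the script's output files are actually there and populated (plain shell; IR_PRI1 is just one example file name):
# the script writes its de-duplicated lists here
ls -l /usr/local/www/aliastables/
# a populated table file should have plenty of lines
wc -l /usr/local/www/aliastables/IR_PRI1
-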
@G.D.:
About IPv6 blacklisting.
It seems that experts in general hold the opinion that blacklisting has not been all that effective in the IPv4 world, so we may as well just abandon it for IPv6 and concentrate on other tactics instead.
Say we still want to blacklist: I think we are going to quit blacklisting individual addresses altogether and ban entire /64s or /42s outright.
Has anyone dabbled in IPv6 blacklisting yet? How is the quality of public IPv6 blacklists?
I haven't even started to look at IPv6. I found this old link on the Spamhaus website:
http://www.spamhaus.org/organization/statement/012/spamhaus-ipv6-blocklists-strategy-statement
I don't know of any IPv6 blocklists. If anyone has time to research, please forward them to the group and we can begin to work out a process to include those.
@jflsakfja:
Now, if there was a way to keep track of the blocked hosts and identify subnets that could be automatically banned…I'm looking at you BBcan177 ;) (same thing the script does, but using pfsense's snort2c table)
In regards to the snort2c table, I think keeping track of repeat offenders is a great idea. This should also cover the pfSense firewall blocks.
I have been working away at getting a beta of pfBlocker that incorporates my script and some other new features. I believe that I am 90% there. It would be nice to get help from the pfBlocker developers, but they seem to have no interest in supporting a new release of pfBlocker. I am not even sure if the devs want to or will support it when I get it finalized. It may end up being a new package…
A few members have been helping to beta test the package and that is helping weed out issues. If anyone has real time to spare, drop me a PM if you have interest in helping BETA test. Thanks to Cino and wcrowder for their support!! ;) ;)
-
Also, would pfBlocker.widget.php_PATCH and diag_dns.php_PATCH work with 2.1.5?
I would assume that there are changes to those files in 2.1.5, so I will need to look at it and write a patch.
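In the meantime, the "No file to patch" lines in your output usually mean the base directory / strip count combination isn't pointing at the real files, rather than the hunks themselves failing. A quick check from a shell (patch file names taken from your output; treat this as a guess, not a fix):
# confirm the target files exist where the diff headers say they are
ls -l /usr/local/www/widgets/widgets/pfBlocker.widget.php /usr/local/www/diag_dns.php
# re-run the dry run from / with one path component stripped
/usr/bin/patch --directory=/ -t -p1 --check --forward --ignore-whitespace -i /var/patches/5401274d202ba.patch
If that still rejects hunks, then it really is the 2.1.5 file contents and you'll need the updated patch.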
From looking at 2.2 and at the 2.1.5 release notes, I think they removed the ability to resolve external IP lookups. If that's the case, I'm not sure whether we should add the functionality back manually or leave it alone. Another possibility is to add a new log in pfBlocker just for the blocklists and resolve from there. That way there is no mucking around with pfSense system files.
From the 2.1.5 release notes:
"Remove javascript alert DNS resolution action from the firewall log view. It was already removed from 2.2, and it's better not to allow a GET action to perform that action."
-
Getting a lot of these errors from lists in my syslog:
php: rc.filter_configure_sync: The command '/usr/bin/fetch -T 5 -q -o '/var/db/aliastables/AbusePalevo.txt.tmp' 'https://127.0.0.1:43/badips/AbusePalevo.txt'' returned exit code '1', the output was 'fetch: https://127.0.0.1:43/badips/AbusePalevo.txt: Operation timed out'
How did you configure the Alias Table? It Should be a "URL Table".
I would also recommend just using the existing IR_ Tables that the Script already created. This saves you the effort of making dozens of Alias Tables.
/usr/local/www/aliastables/IR_MAIL
/usr/local/www/aliastables/IR_PRI1
/usr/local/www/aliastables/IR_IB
/usr/local/www/aliastables/IR_SEC1
/usr/local/www/aliastables/IR_PRI2
/usr/local/www/aliastables/IR_SEC3
/usr/local/www/aliastables/IR_TOR
/usr/local/www/aliastables/IR_SEC2
/usr/local/www/aliastables/IR_PRI3
Thanks BBcan177,
I'll try the IR_ Tables instead.
Currently, with the script installed and only Suricata configured following the guidelines in this thread, my RAM usage is on average in the 80% range.
I was thinking of setting up a simple cache (Squid proxy) along with it, but with my RAM usage so high, is there any way I can still get the most out of Suricata/pfiprep together with the Squid cache?
What lists/rules would be fine to disable? (I've followed jflsakfja's guidelines, but I still think there are some lists/rules that may not apply to my usage scenario.)
I have a 50 Mbps connection, and my pfSense system is an Athlon 64 3000+ with 1.25GB of RAM and two NICs (LAN/WAN). That's connected to a switch with a wireless AP, 5 wired clients and possibly over 25 wireless clients.
We are basically a media-consuming household (HD streaming, downloads, torrents, Skype, TeamViewer, Facebook, etc.), and if it were up to me I'd block most of these kinds of traffic, but all hell would break loose if I did that, seriously.
Other than the OpenVPN server I have on my pfSense machine, Squid cache, Suricata, pfiprep… that's about it.
Any rules/lists that you recommend disabling to reduce memory usage?
-
@G.D.:
About IPv6 blacklisting.
It seems that experts in general hold the opinion that blacklisting has not been all that effective in the IPv4 world, so we may as well just abandon it for IPv6 and concentrate on other tactics instead.
Say we still want to blacklist: I think we are going to quit blacklisting individual addresses altogether and ban entire /64s or /42s outright.
Has anyone dabbled in IPv6 blacklisting yet? How is the quality of public IPv6 blacklists?
I see that jflsakfja already replied, and I'd agree with nearly all of it, except that I'd recommend blocking on subnet boundaries with IPv6, and here's why. /64 is the smallest advertisable block size for IPv6. In fact, that size will likely be assigned to every home/consumer connection. A /56 or /64 will likely be the smallest allocated size, and you'll get your own /64 or so for your home, etc. Some are even saying that could be a /48, but I doubt that. The trick is that not everyone is a home user, and you could have issues with hosted systems, hitting an innocent with a /64 blacklist entry. There are other interesting characteristics about IPv6, and I think we will see some innovation in security due to it once we have broad uptake of the address system.
So, blacklisting in IPv6 should almost certainly be done at those boundaries, or we'd be in hairy hell chasing an effective blocking measure. I'll take that conundrum over the IPv4 hell we are in now.
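For reference, pf itself has no problem holding whole v6 prefixes in a table. A rough sketch (the table name and prefix are made up, and the table still needs a block rule or pfBlocker/pfiprep alias referencing it to actually drop anything):
# add one offending /64 to a hypothetical ban table and list it
pfctl -t banned_v6 -T add 2001:db8:abcd:1234::/64
pfctl -t banned_v6 -T show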
-
Thanks BBcan177,
I'll try the IR_ Tables instead.
Currently, with the script installed and only Suricata configured following the guidelines in this thread, my RAM usage is on average in the 80% range.
I was thinking of setting up a simple cache (Squid proxy) along with it, but with my RAM usage so high, is there any way I can still get the most out of Suricata/pfiprep together with the Squid cache?
What lists/rules would be fine to disable? (I've followed jflsakfja's guidelines, but I still think there are some lists/rules that may not apply to my usage scenario.)
I have a 50 Mbps connection, and my pfSense system is an Athlon 64 3000+ with 1.25GB of RAM and two NICs (LAN/WAN). That's connected to a switch with a wireless AP, 5 wired clients and possibly over 25 wireless clients.
We are basically a media-consuming household (HD streaming, downloads, torrents, Skype, TeamViewer, Facebook, etc.), and if it were up to me I'd block most of these kinds of traffic, but all hell would break loose if I did that, seriously.
Other than the OpenVPN server I have on my pfSense machine, Squid cache, Suricata, pfiprep… that's about it.
Any rules/lists that you recommend disabling to reduce memory usage?
I'm seeing 22-25% of 4GB RAM usage on systems configured to the T using this guide, so your 80% of 1.25GB seems about right. 1.25 seems wrong though. What's that, 1GB+256MB stick, or 2x512MB+256MB stick? In either case, getting more RAM is what I would do. 1GB shouldn't be more than $20. If you are using 1GB+256MB just get another GB and replace the 256MB module. Since it takes about 1GB for each system configured according to this guide, you should drop down to 50% RAM, which should give you plenty of room to use other things if you need to.
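If you want to see exactly where that RAM goes before spending money, something like this from a shell is usually enough (FreeBSD ps; the fourth column is %MEM, and Suricata will almost certainly be at the top):
# biggest memory consumers first
ps aux | sort -rn -k 4 | head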
-
@jflsakfja:
Thanks BBcan177,
I'll try the IR_ Tables instead.
Currently, with the script installed and only Suricata configured following the guidelines in this thread, my RAM usage is on average in the 80% range.
I was thinking of setting up a simple cache (Squid proxy) along with it, but with my RAM usage so high, is there any way I can still get the most out of Suricata/pfiprep together with the Squid cache?
What lists/rules would be fine to disable? (I've followed jflsakfja's guidelines, but I still think there are some lists/rules that may not apply to my usage scenario.)
I have a 50 Mbps connection, and my pfSense system is an Athlon 64 3000+ with 1.25GB of RAM and two NICs (LAN/WAN). That's connected to a switch with a wireless AP, 5 wired clients and possibly over 25 wireless clients.
We are basically a media-consuming household (HD streaming, downloads, torrents, Skype, TeamViewer, Facebook, etc.), and if it were up to me I'd block most of these kinds of traffic, but all hell would break loose if I did that, seriously.
Other than the OpenVPN server I have on my pfSense machine, Squid cache, Suricata, pfiprep… that's about it.
Any rules/lists that you recommend disabling to reduce memory usage?
I'm seeing 22-25% of 4GB RAM usage on systems configured to the T using this guide, so your 80% of 1.25GB seems about right. 1.25 seems wrong though. What's that, 1GB+256MB stick, or 2x512MB+256MB stick? In either case, getting more RAM is what I would do. 1GB shouldn't be more than $20. If you are using 1GB+256MB just get another GB and replace the 256MB module. Since it takes about 1GB for each system configured according to this guide, you should drop down to 50% RAM, which should give you plenty of room to use other things if you need to.
Thanks for the reply.
You're correct, it's a 1GB+256MB stick, and usage is around 83% of that most of the time. If that's average use for this guide, then I'll invest in another 1GB of RAM. Security is more important than bandwidth/speed ;D
-
Getting a lot of these errors from lists in my syslog:
php: rc.filter_configure_sync: The command '/usr/bin/fetch -T 5 -q -o '/var/db/aliastables/AbusePalevo.txt.tmp' 'https://127.0.0.1:43/badips/AbusePalevo.txt'' returned exit code '1', the output was 'fetch: https://127.0.0.1:43/badips/AbusePalevo.txt: Operation timed out'
How did you configure the Alias Table? It Should be a "URL Table".
I would also recommend just using the existing IR_ Tables that the Script already created. This saves you the effort of making dozens of Alias Tables.
/usr/local/www/aliastables/IR_MAIL
/usr/local/www/aliastables/IR_PRI1
/usr/local/www/aliastables/IR_IB
/usr/local/www/aliastables/IR_SEC1
/usr/local/www/aliastables/IR_PRI2
/usr/local/www/aliastables/IR_SEC3
/usr/local/www/aliastables/IR_TOR
/usr/local/www/aliastables/IR_SEC2
/usr/local/www/aliastables/IR_PRI3
I followed your suggestion and changed the URL Tables, and it seems to be working (the IPs show up when I mouse over the aliases), but I still get syslog errors.
Aug 30 09:40:26 php: rc.filter_configure_sync: The command '/usr/bin/fetch -T 5 -q -o '/var/db/aliastables/IR_SEC2.txt.tmp' 'https://127.0.0.1:443/usr/local/www/aliastables/IR_SEC2'' returned exit code '1', the output was 'fetch: https://127.0.0.1:443/usr/local/www/aliastables/IR_SEC2: Operation timed out'
Is this something to be concerned about?
pfIP Rep widget shows last update was Aug 30 09:43 with status arrow green (up) though…
-
Getting a lot of these errors from lists in my syslog:
php: rc.filter_configure_sync: The command '/usr/bin/fetch -T 5 -q -o '/var/db/aliastables/AbusePalevo.txt.tmp' 'https://127.0.0.1:43/badips/AbusePalevo.txt'' returned exit code '1', the output was 'fetch: https://127.0.0.1:43/badips/AbusePalevo.txt: Operation timed out'
I don't understand where the .tmp extension comes from; shouldn't it end with .txt?
If you are using the IR_ Alias URL Tables, I would remove the URL Tables that reference the individual files shown above. I would also delete those individual alias files in /var/db/aliastables.
Hope that helps.
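Roughly what I mean (AbusePalevo.txt is just the example from your log; delete the corresponding alias in the GUI first):
# see which stale per-list files are still hanging around
ls -l /var/db/aliastables/
# then remove the leftover copy of the deleted alias
rm /var/db/aliastables/AbusePalevo.txt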
-
Getting a lot of these errors from lists in my syslog:
php: rc.filter_configure_sync: The command '/usr/bin/fetch -T 5 -q -o '/var/db/aliastables/AbusePalevo.txt.tmp' 'https://127.0.0.1:43/badips/AbusePalevo.txt'' returned exit code '1', the output was 'fetch: https://127.0.0.1:43/badips/AbusePalevo.txt: Operation timed out'
I don't understand where the .tmp extension comes from; shouldn't it end with .txt?
If you are using the IR_ Alias URL Tables, I would remove the URL Tables that reference the individual files shown above. I would also delete those individual alias files in /var/db/aliastables.
Hope that helps.
Hey BBcan177!
I figured it out.
- "Operation timed out" was caused by the port being blocked. I had previously set a custom webConfigurator port and disabled anti-lockout, so entering my port number, 66, allowed the fetch command to access the list directory.
- Even though it passed using the custom port, I still had the fetch error "not found".
I confirmed my directory was correct, which was "/usr/local/www/aliastables":
$ ls /usr/local/www/aliastables
IR_CC IR_IB IR_Match IR_PRI1 IR_PRI2 IR_SEC1 IR_SEC2 IR_TOR
It seems the web root already defaults to "/usr/local/www/" when adding the URL tables, so the shortened address of "https://127.0.0.1:66/aliastables/IR_SEC2", for example, worked for all the IR_ lists.
-
Yeah, to get back to my issue, for future reference.
I'll give the example with Google again. The particular problem was for a client that cannot afford to miss certain mails from some senders, plus several other connections both in & out (long story). I present this magnificent snippet story.
Either a manual alias is created, or a cron job whoises a company into a list. Let's take Google.
Cron job creating the list:
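Roughly what the cron job boils down to (the AS number and output path here are just an illustration, not necessarily what the real job uses):
# pull Google's announced prefixes from RADb and write one CIDR per line, ready to be used as a URL table
/usr/bin/whois -h whois.radb.net -- '-i origin AS15169' | awk '/^route:/ {print $2}' > /usr/local/www/aliastables/GoogleNets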
List used in floating rules (disabled here, but you get the idea):
Detailed:
If I enable that rule => because every Gmail server is matched in that rule, the packet is matched, the rule is applied (pass it) and that's it.
My SMTP NAT rule does… absolutely nothing. It's very easy to test in this case (enable & send mail = nothing; disable = mail received instantly).
Disabling the quick option works, of course. But if an IP I want to whitelist is ever blacklisted (I'm not saying it is, or will be any time soon; it doesn't matter, as it's a direct request from that direction), I'm screwed :).
Since the blocklist beneath it will match, and will block.
I confirmed the issue with 2 setups; both are completely ignoring NAT after a floating rule match.
OR
I'm completely wrong, and the fact that it is whitelisted first (like normal interface rules: priority ruling) is enough? No quick option.
For websites, DNS requests and the like it's not an issue. It's just where a forwarding rule has to be applied afterwards that seems to be impossible with floating rules.
@BBcan177
Haven't checked your updates yet, but since 2.1.5 altered DNS/whois, the DNS patch to include the lists doesn't show them anymore. Only updated 1 system thus far, so no idea if it's a bug or not.
EDIT: it would be awesome if you could make a rule with "all ports except xyz", just like it's possible with hosts/networks.
-
Don't see anything wrong with those, but then again not much is given.
- show the popup for the alias to see the IPs
- show the complete rules (just change public IPs to something obviously public, eg 1.1.1.1 and leave private IPs as private).
- show your floating rules tab
- Run a packet capture on the ingress + egress interfaces and post the redacted text (just replace IPs so we can tell whether it's going public or private). No need for a full transcript; "Normal" detail will do, just to see how the packets are flowing (a minimal capture command is sketched below).
I have traffic shaping rules limiting speeds to public IPs, which are in turn NATed to private ones and everything works perfectly, so as far as I can tell nothing gets ignored. And yes, that does apply to both inbound and outbound traffic.
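For the capture itself, something as simple as this gives enough detail to see where the SMTP packets stop (the interface name is whatever your WAN/LAN actually is; the GUI packet capture at "Normal" detail is equivalent):
# watch port 25 traffic on the WAN-side interface
tcpdump -ni em0 -c 30 'tcp port 25'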
-
1.
The list goes on quite a bit ;). It's not a problem of the list not loading or being empty.
2.
Forwarding to the alias (IP 10.0.0.2 == internal Exchange server).
The rule is applied on both WAN interfaces. Only the MX record is active on the VDSL interface in question (failover not yet active).
3.
Nothing but the whitelist rule; the rest is blocklist blocking.
The 2 blocking rules beneath the whitelist are there to isolate a separate physical network from accessing the main network (basically a block-all from interface x to y), plus a rule to lock down internet access during the night for the second network.
4.
Capturing WAN when the rule is applied gives me:
00:47:40.700800 IP 209.85.220.177.38924 > <wanip>.25: tcp 0
00:47:41.699405 IP 209.85.220.177.38924 > <wanip>.25: tcp 0
00:47:43.699672 IP 209.85.220.177.38924 > <wanip>.25: tcp 0
00:47:47.699690 IP 209.85.220.177.38924 > <wanip>.25: tcp 0
etc
Disabling the floating whitelist Google rule (recaptured; the previous IP was another sender):
00:42:47.405562 IP 209.85.220.181.42122 > <wanip>.25: tcp 0
00:42:47.405847 IP <wanip>.25 > 209.85.220.181.42122: tcp 0
00:42:47.522800 IP 209.85.220.181.42122 > <wanip>.25: tcp 0
00:42:47.523516 IP <wanip>.25 > 209.85.220.181.42122: tcp 100
00:42:47.641067 IP 209.85.220.181.42122 > <wanip>.25: tcp 0
00:42:47.641599 IP 209.85.220.181.42122 > <wanip>.25: tcp 31
00:42:47.641865 IP <wanip>.25 > 209.85.220.181.42122: tcp 191
00:42:47.759789 IP 209.85.220.181.42122 > <wanip>.25: tcp 10
00:42:47.760055 IP <wanip>.25 > 209.85.220.181.42122: tcp 29
00:42:47.878097 IP 209.85.220.181.42122 > <wanip>.25: tcp 186
00:42:47.878731 IP <wanip>.25 > 209.85.220.181.42122: tcp 1418
00:42:47.878768 IP <wanip>.25 > 209.85.220.181.42122: tcp 54
00:42:47.998797 IP 209.85.220.181.42122 > <wanip>.25: tcp 0
And the 3 test mails I sent while the floating rule was active were received 5 minutes later (re-attempt by the Gmail server).
Internal:
Rule on:
22:51:47.917275 IP 209.85.220.174.48733 > 10.0.0.2.25: tcp 0
22:51:47.917442 IP 10.0.0.2.25 > 209.85.220.174.48733: tcp 0
22:51:48.917181 IP 209.85.220.174.48733 > 10.0.0.2.25: tcp 0
22:51:50.917183 IP 209.85.220.174.48733 > 10.0.0.2.25: tcp 0
22:51:50.937085 IP 10.0.0.2.25 > 209.85.220.174.48733: tcp 0
22:51:54.917219 IP 209.85.220.174.48733 > 10.0.0.2.25: tcp 0
22:51:56.943003 IP 10.0.0.2.25 > 209.85.220.174.48733: tcp 0
22:52:02.917255 IP 209.85.220.174.48733 > 10.0.0.2.25: tcp 0
22:52:08.939232 IP 10.0.0.2.25 > 209.85.220.174.48733: tcp 0
209.85.220.174 is a Gmail server IP, and is present in the Google alias.
Rule off (capture running from before turning it back off):
22:53:43.113162 IP 209.85.220.181.33212 > 10.0.0.2.25: tcp 0
22:53:43.113340 IP 10.0.0.2.25 > 209.85.220.181.33212: tcp 0
22:53:43.230793 IP 209.85.220.181.33212 > 10.0.0.2.25: tcp 0
22:53:43.231483 IP 10.0.0.2.25 > 209.85.220.181.33212: tcp 100
22:53:43.349044 IP 209.85.220.181.33212 > 10.0.0.2.25: tcp 0
22:53:43.349547 IP 209.85.220.181.33212 > 10.0.0.2.25: tcp 31
22:53:43.349829 IP 10.0.0.2.25 > 209.85.220.181.33212: tcp 191
22:53:43.467508 IP 209.85.220.181.33212 > 10.0.0.2.25: tcp 10
22:53:43.467741 IP 10.0.0.2.25 > 209.85.220.181.33212: tcp 29
22:53:43.589822 IP 209.85.220.181.33212 > 10.0.0.2.25: tcp 186
22:53:43.590172 IP 10.0.0.2.25 > 209.85.220.181.33212: tcp 1418
22:53:43.590213 IP 10.0.0.2.25 > 209.85.220.181.33212: tcp 54
22:53:43.710304 IP 209.85.220.181.33212 > 10.0.0.2.25: tcp 0
22:53:43.710853 IP 209.85.220.181.33212 > 10.0.0.2.25: tcp 326
22:53:43.723228 IP 10.0.0.2.25 > 209.85.220.181.33212: tcp 59
22:53:43.840808 IP 209.85.220.181.33212 > 10.0.0.2.25: tcp 69
22:53:43.841143 IP 10.0.0.2.25 > 209.85.220.181.33212: tcp 213
22:53:43.959329 IP 209.85.220.181.33212 > 10.0.0.2.25: tcp 85
22:53:43.959789 IP 10.0.0.2.25 > 209.85.220.181.33212: tcp 53
22:53:44.077556 IP 209.85.220.181.33212 > 10.0.0.2.25: tcp 69
22:53:44.080321 IP 10.0.0.2.25 > 209.85.220.181.33212: tcp 53
22:53:44.199096 IP 209.85.220.181.33212 > 10.0.0.2.25: tcp 1418
22:53:44.199126 IP 209.85.220.181.33212 > 10.0.0.2.25: tcp 11
22:53:44.199158 IP 209.85.220.181.33212 > 10.0.0.2.25: tcp 165
22:53:44.199278 IP 10.0.0.2.25 > 209.85.220.181.33212: tcp 0
22:53:44.632042 IP 10.0.0.2.25 > 209.85.220.181.33212: tcp 149
22:53:44.750021 IP 209.85.220.181.33212 > 10.0.0.2.25: tcp 37
22:53:44.750129 IP 209.85.220.181.33212 > 10.0.0.2.25: tcp 0
22:53:44.750235 IP 10.0.0.2.25 > 209.85.220.181.33212: tcp 0
22:53:44.750300 IP 10.0.0.2.25 > 209.85.220.181.33212: tcp 85
22:53:44.750437 IP 10.0.0.2.25 > 209.85.220.181.33212: tcp 0
22:53:44.867823 IP 209.85.220.181.33212 > 10.0.0.2.25: tcp 0
22:53:44.867919 IP 209.85.220.181.33212 > 10.0.0.2.25: tcp 0
Test mail received instantly; the previous test mails came in 5 minutes later.
So yeah… it's not that it's such a disaster (I can find ways around it, tbh). But I cannot stand not knowing why this is happening.
EDIT: wrong IP in the first capture.