Taming the beasts… aka suricata blueprint
-
Since wetspark's post, I've been meaning to ask all how things are going if you followed this guide.
@ALL Is everything working like it should? Does it get the job done? How stable was the resulting system?
If anyone has anything to add, please do so. The more discussion that goes into pfsense/suricata/lists the better for us all ;D
EDIT: In other words, was the guide successful in taming the beasts? :D
-
And this is what comes out of the guide:
-
@jflsakfja:
Since wetspark's post, I've been meaning to ask all how things are going if you followed this guide.
@ALL Is everything working like it should? Does it get the job done? How stable was the resulting system?
If anyone has anything to add, please do so. The more discussion that goes into pfsense/suricata/lists the better for us all ;D
EDIT: In other words, was the guide successful in taming the beasts? :D
I was away for a while, so sorry for not responding sooner :-*
It is very stable. Your guide about the firewall rules shows a lot of things being blocked, although the internet experience doesn't suffer for it (e.g. websites can still be accessed, WIFE doesn't complain although I see numerous blocks from her LAN-IP out to the internets).
BB's lists also work very nicely, although there are quite a few false positives in them, though of course BB can't be blamed for that.
I can not yet comment on the Suricata matters, as I currently have Snort:
1. I wanted to move over to Suricata but your rules instructions are massive, and the Suricata GUI doesn't allow me to apply the rule instructions relatively fast.
2. The great Bill announced some modifications to the GUI, I don't know where that currently is at, because:
2A. I had to uninstall Suricata and some other packages since one of them was making my box crash - and I don't know which one it was nor did I have the time to find out;
2B. I was away, busy, playing 'Zorro' for some people out there ;D
So:
1. Firewall rules seem very useful;
2. BB's script is an artwork that performs very well too;
3. I am dying to dive into Suricata but I need to catch up to where it is at, avoiding the repeated crashes of some weeks ago.
In the end:
Great JFL;
Great BB;
Great Bill;
Great mysterious man who refuses to let me buy him a coffee;
Great pfSense team;
Great many other helpful members;
Not so great few members ( ;D ;D ;D ;D ;D ). -
@jflsakfja
Can that beast ever really be tamed? But your posts have gone a long way toward taming them for my meager home network. With kids and all my toys, having this kind of control over that beast is awesome. Thank you for your time putting this all together.
-
I just found an issue and would like some input (Really need a vacation soon..).
So, in regards to the block-lists, I use floating rules instead of interface specific rules. (pfblocker)
All good and well, except that I also use a couple of white-lists (generated by a whois cron job, or just plain manual).
For example: I have a Google white-list to allow gmail, google dns, the bunch. General floating rules for both directions.
BUT, if I apply the "Apply the action immediately on match." option, my NAT rules are completely ignored. So forget forwarding a match in the whitelist floating rule to an exchange server internally.
Causing the very nice result of not a single Gmail message getting delivered anymore.
Using the white-list in the specific interfaces will be of no use, since the block lists' floating rules will always take priority.
So, any other solution except for changing all floating rules to interface specifics? Be it manually or via pfblocker.
*edit
To be specific: I just want to whitelist the whole list I create, not start with specific port ranges. -
I just found an issue and would like some input (Really need a vacation soon..).
So, in regards to the block-lists, I use floating rules instead of interface specific rules. (pfblocker)
All good and well, except that I also use a couple of white-lists (generated by a whois cron job, or just plain manual).
For example: I have a Google white-list to allow gmail, google dns, the bunch. General floating rules for both directions.
BUT, if I apply the "Apply the action immediately on match." option, my NAT rules are completely ignored. So forget forwarding a match in the whitelist floating rule to an exchange server internally.
Causing the very nice result of not a single Gmail message getting delivered anymore.
Using the white-list in the specific interfaces will be of no use, since the block lists' floating rules will always take priority.
So, any other solution except for changing all floating rules to interface specifics? Be it manually or via pfblocker.
*edit
To be specific: I just want to whitelist the whole list I create, not start with specific port ranges.
Can you please post your rules? It's not very clear what's happening. Are you using pfblocker as aliases + floating rules?
NAT rules shouldn't just be ignored. Think of NAT as the "final" stepping stone: NAT should be the last thing a packet sees coming out of an interface. Even if a packet is explicitly passed, it only makes it out of the interface if it's going to be NATed; if the interface has any NATing applied to it, the packet must follow that.
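For what it's worth, the underlying pf behaviour on FreeBSD is that translation rules (rdr/nat) are evaluated before filter rules, so filter rules see the already-translated addresses. A hypothetical hand-written pf.conf fragment to illustrate the ordering; pfSense generates these rules itself, and the macro and table names here are made up:

```
# pf applies rdr (translation) before filtering, per pf.conf(5).
# An inbound SMTP port forward rewrites the destination first...
rdr on $wan_if proto tcp from any to $wan_ip port 25 -> $exchange_lan_ip

# ...so a quick floating-style pass rule has to match the TRANSLATED
# destination, not the original WAN address:
pass in quick on $wan_if proto tcp from <google_whitelist> to $exchange_lan_ip port 25
```

And since a quick rule stops evaluation at the first match, a whitelist pass placed below a quick blocklist rule never gets a chance to fire.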
-
jflsakfja,
I see the pull request has gone in for Suricata 2. Plan to start a new thread and refresh the beast taming blueprint with updates or just update this thread?
I'm excited to get v2 for sure. Besides the core areas of improvement, the sig management and log output abilities are going to be great!
-
A lot of changes are coming, and that will take a while to properly test and document them. What I'm planning to do is a separate thread where we all discuss the useless rules, and come up with bare minimum disablesid files that (hopefully) everybody that installs suricata/snort will use. Imagine not having to go through my list and disable all those rules ;)
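For anyone unfamiliar with the mechanics: the usual way to express such a bare-minimum disable list is an oinkmaster/pulledpork-style disablesid file with one GID:SID pair per line. A sketch using two rules discussed later in this thread (the comments are mine):

```
# disablesid.conf sketch -- pulledpork "GID:SID" syntax, one per line
# GPL FTP wu-ftp bad file completion attempt (breaks FileZilla)
1:2101377
# same rule with brace (breaks FileZilla)
1:2101378
```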
I also plan to post instructions on tweaking suricata preprocessor settings, and maybe get The Company to release our permanently banned list. We just recently peaked at nearly 15K suricata banned hosts, that gives us a whole lot of intelligence on certain IP ranges (interesting fact: Microsoft (yes THAT Microsoft) has a subnet dedicated to remotely scanning hosts). This list already contains a whole lot of hosts, and I'm currently working on figuring out a way to get suricata to ignore the packets from that list, so that we don't waste processing on those (the hosts are already blocked by pfsense, but the copy of the packets still has to pass through suricata and get processed). There is a way to set the packet "forwarder" to ignore certain packets, but I didn't have the time to mess with it. Keeping track of a 4 million IP list takes a lot of head scratching :o (we have a /11 in that list).
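One route that might fit here: Suricata can be handed a capture-level BPF filter (the `-F <bpf filter file>` command line option, or a bpf-filter setting in suricata.yaml), so packets from permanently banned subnets are discarded before any rule processing. A hedged sketch of generating such a filter from a one-CIDR-per-line list; the function name and file paths are my own invention:

```shell
# build_bpf_filter: read one CIDR per line on stdin, emit a single BPF
# expression excluding all of them, e.g.
#   not net 10.0.0.0/8 and not net 192.168.0.0/16
build_bpf_filter() {
  filter=""
  while read -r net; do
    case "$net" in ''|\#*) continue ;; esac   # skip blanks and comments
    if [ -z "$filter" ]; then
      filter="not net $net"
    else
      filter="$filter and not net $net"
    fi
  done
  printf '%s\n' "$filter"
}

# Example usage (hypothetical paths):
#   build_bpf_filter < /usr/local/etc/suricata/banned.txt > ignore.bpf
#   suricata -F ignore.bpf -i em0 ...
printf '10.0.0.0/8\n192.168.0.0/16\n' | build_bpf_filter
```

The obvious caveat is that a BPF expression built from a 4 million IP list would be enormous, so in practice the list would first need to be collapsed into covering subnets.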
There is also an extremely thin chance of The Company actually releasing our custom rules. The only thing stopping us from doing so isn't that they are proprietary, it's the fact that the vast majority of so called "security experts" will be out of job if we do release them. We had to clear up our blocked suricata hosts recently (IPv6 bug) and within an hour they had already added 200 blocked hosts.
So many things to do, so few years to do them in. I envy elves for living a few centuries.
-
A copy+paste from the list:
emerging-ftp > all except:
2101377 GPL FTP wu-ftp bad file completion attempt <<< breaks filezilla
2101378 GPL FTP wu-ftp bad file completion attempt with brace <<< breaks filezilla
A copy+paste from ET's RSS:
[--] Removed rules: [--]
…snipped...
2101377 -- GPL FTP wu-ftp bad file completion attempt (ftp.rules)
2101378 -- GPL FTP wu-ftp bad file completion attempt with brace (ftp.rules)
Someone actually listened to me? It brings tears to my eyes :'(. Now go do the same for the rest of the rules ;D
EDIT: Updated the list and removed those rules to keep it clean
-
While I don't have anything to do with MS, I have done some work on infrastructure for such probe systems for other companies. They are all "Whitehat" but I don't know that I agree with all of it. MS has a significant whitehat probe deployment and they often make the news with botnet take downs due to it.
Realistically, if there was a way to get a consumer-friendly version of what's explained in this thread in a box folks could just install at home, the probe networks would have nothing to do. I have wondered if a decent ISP couldn't make a fair profit by deploying managed consumer firewalls based on stuff like this. They all want to give away anti-virus but that's a crap band-aid that has little real impact.
Then I talk to those that have Macs or something and think they are all secure because those companies told them they were, since they don't run Windows.
Back on topic. I hope to spend some more quality time with the 2 release and certainly will be following this thread and maybe offering more than peanut gallery commentary.
-
While I don't have anything to do with MS, I have done some work on infrastructure for such probe systems for other companies. They are all "Whitehat" but I don't know that I agree with all of it. MS has a significant whitehat probe deployment and they often make the news with botnet take downs due to it.
Realistically, if there was a way to get a consumer-friendly version of what's explained in this thread in a box folks could just install at home, the probe networks would have nothing to do. I have wondered if a decent ISP couldn't make a fair profit by deploying managed consumer firewalls based on stuff like this. They all want to give away anti-virus but that's a crap band-aid that has little real impact.
Then I talk to those that have Macs or something and think they are all secure because those companies told them they were, since they don't run Windows.
Back on topic. I hope to spend some more quality time with the 2 release and certainly will be following this thread and maybe offering more than peanut gallery commentary.
Whitehat has its limits, as far as I'm concerned. For example, you don't go around scanning every single webserver out there to see if it's vulnerable to a certain exploit. There is nothing whitehat about that. Would you want a stalker documenting your daily life in public for the purpose of "observing how people live their daily lives"? With detailed public notes about when you come home and leave for work? No, what you would want is your neighbor calling you up and saying that a car with number plates X parked in front of your house, people got out and had a look, peeking through windows taking notes, then took a couple of pictures before getting in the car and driving away.
The traffic we've seen from Microsoft is remote scanning for RDP, among other more "interesting" stuff (eg. privileged>privileged traffic, technically bad traffic). Other than to exploit a vulnerability in RDP, there is absolutely no reason to remotely scan for that. What, is someone trying to tell me Microsoft wants to know how many are running RDP? Or will they contact me if they find I am running RDP on the Internet, tell me I'm a bad boy and slap me on the wrist? Come on guys, defending the moon landing is one thing, defending observed traffic coming from a (barring NSA ties) respectable corporation is another.
I'm sure those hosts have nothing to do with Microsoft and due to certain cosmic events have ended up in one of Microsoft's subnets. I also understand that nobody on the Internet has observed this traffic originating from them, and no person (dead or alive) has alerted them to it. In the end, it's likely due to a number of misconfigured hosts. There is also the high likelihood that it's spoofed traffic and the attacker has chosen that particular subnet without any endorsement, support and/or other help from Microsoft Corporation and/or any of its partners.
cough
Whitehat, spoofed or otherwise, if I don't have a use for that particular subnet (no services I need, no services they need) it's getting blocked. And the best thing is I didn't do it. The machine told me to :D
-
I wasn't defending the activity, just mentioning that I'm aware that certain probe networks and infrastructures exist. The stuff you've seen from MS subnets is almost certainly not from their probe systems. All of these that I've had experience with are buried in a ton of shell companies and fake registrations to hide the originating sources.
I'd guess what you've seen is either due to hosted/cloud systems, spoofed IPs, or even compromised systems within their networks.
-
In that case they are violating IANA and/or regional registry IP assignment rules. The subnet was DA (Directly Allocated) to Microsoft. That means that ONLY Microsoft can use that subnet, NOT assign it to their customers. For further reassignments, the IPs need to be AP (Assigned Portable). Can I have that IP now that they have a reason to take it away from Microsoft? :P Pretty please with cherry on top?
-
Good luck with that. I've seen large chunks of IPv4 rotting away in allocations for a long time and no one has ever been able to pry them loose.
-
As far as I can tell, there have been recent cases where regionals did exactly that. They reclaimed wasted IPv4 space, since it's far easier to waste a couple of years legally pursuing a decision in a court to reclaim them, than to go ahead and enable IPv6. Not saying that I'll reclaim IPs from Microsoft, I'm saying that it's not impossible to do it.
-
I'm ready for IPv4 to die, really. I took my first IPv6 class in the late 90s and I've been pushing for adoption ever since. It's been my experience that FUD has prevented more IPv6 adoption than any technical reason.
-
I'm ready for IPv4 to die, really. I took my first IPv6 class in the late 90s and I've been pushing for adoption ever since. It's been my experience that FUD has prevented more IPv6 adoption than any technical reason.
What, no +1000 button on this forum? ;)
Exactly. Personally I'm in a position where I'm dealing with the provider side of things. If I don't enable it, ISPs have no reason to enable it. I enable IPv6 for all my clients (where applicable) so at least I'm doing my part for it. It's exactly like you said, there is no technical reason not to enable it, it's just that 60 year old sysadmins have a fear of large numbers. It's exactly the same thing as IPv4, but with more characters in an address, nothing more.
Cue the "but what about ISP routers not supporting IPv6, switches, ETHERNET CABLES NOT SUPPORTING IPv6!" crowd. Every single device out there in production use that is lvl2 or lower can handle IPv6. If anyone disagrees, please file your resignation first thing tomorrow morning. There are a lot of new people out there just dying to get your job.
-
I do provider infrastructure and lots of them are still on the "Wait and see" fence for IPv6. I've shown them how easy it is to run dual stack and they still just don't get it. I'm a gray beard myself and numbers are my friends. :)
-
I agree with you. Things aren't going to change though, because instead of the regionals doing their part and giving out all the remaining IPs and be done with it, they are hoarding them whispering "My preciooooouuusssss". If there is no IP shortage, you aren't forced to move to IPv6. If all the addresses are taken, and you want to bring up a new network, you are forced to go to IPv6.
All OSes these days are already shipping ready for IPv6. As long as the router understands IPv6, there is NOTHING stopping you from actually using IPv6. Switches? lvl2=>IPv6. WIFI APs? lvl2=>IPv6. Bridged modems? lvl2=>IPv6. Routed modems (like DSL modems)? majority is already able to do it.
Just guessing, but I'd say that 80% of the equipment out there is IPv6 capable, it's just the "sysadmins" (just quoting, if I start calling them names, I'll fill the entire post) can't be bothered with it.
The cycle is this:
Website > ISP > client > ISP > Website
As long as 1 of the 3 has access to new IPv4s, the other 2 will not be bothered to make the transition. What needs to be done is google, facebook and twitter stop using IPv4s and start using IPv6 exclusively. That would trigger a few (million) angry calls from clients, which would force ISPs to be bothered with it. But since we are living in a greed-driven society, that will never happen, and we are back to zero.
-
I agree with both of you… I work for a major MSO in the states which over the last year has deployed IPv6 to almost every market.. Except the customer facing support folks don't officially support it. The norm: the engineers deploy something but forget to let support know about it. I can't speak for all OSes but Windows 7 prefers IPv6 over IPv4. With that being said, pretty much most of the major sites I go to are routing IPv6: facebook, yahoo, google. I guess in one way, this MSO is trying to get IPv6 out there; at least ready before it is required by all.
-
@Cino: Yeap, seen that too. A certain ISP in Cyprus (don't want to point any fingers :o) has enabled IPv6 internally but is just refusing to enable it on the customer side. I can even tell you their answer if you ask for it: "IPv6?!?! I've spent 30 years in the telecom industry and I've never heard of it. Oh, we are actually using it? Well due to incompatibility with our customer modems, we can't currently offer it to our customers." When escalating to the department manager, here is his response: "No, he didn't mean we can't use it, er… I'm sure it was a misunderstanding. It's just that having to change the settings on the modem takes a lot of time. Imagine having to do that for all our customers."
Translating both of their responses to normal talk: "We don't actually talk to other departments because we can't be bothered extending our arm to pick up the phone unless the customer is paying us several hundreds of thousands of €/month".
And yes, it's an actual story, not about IPv6, but about them adding an IP to their ACL after 15 months of constant DoS attacks. Compliments to the attacker's upstream for *gulp* going as far as ACTUALLY CHANGING the attacker's IP. Because everyone in the IT industry (from the person wiping the floors, to the CEOs) knows that changing an IP simply fixes the problem. I feel so special for them to finally do this for me! ::)
What actually gets on my nerves is that there are persons with far greater knowledge and will to work than the 60 year olds infesting the IT industry. Persons that are currently starving without a job because the idiots are sworn to occupy the positions until all that's left of them is their skeletons strapped to their chairs. If you do point out this fact, they get mad at you for some reason...
-
LOL, I love the way you write man!!
At the company I work for, we don't have that problem with people stuck in the old ways of doing things in IT…. Usually they end up with a package if they can't adapt to the ever-changing corporate borg. Or they find themselves with a handset in Tech Support with Customer Service.
I was having an issue with how the modem was handing out the IPv6 prefix; luckily for me I have contacts that work on that side of the house... A couple hours later my /56 was working great at home... Before doing this though, I asked the Tech Support Tier 3 folks that handle customer calls first.. Yeah it's not supported but there is going to be a pilot test rolling out soon, do you want to be part of it?... I told them, yeah man, but IPv6 addresses are already being handed out to the modems as we speak... He didn't believe it until I showed him a screenshot of my computer's IP settings.... We have some awesome tools that were developed in-house to troubleshoot modem issues but they haven't been updated yet to include IPv6. Wouldn't surprise me if that programmer left the company.
-
Another deleted rule 2012688 !!! That's 3 rules out of my list in a week. Someone is being naughty and reading my list ;). Keep it up guys, with this pace we'll go through the entire list by next year's end.
-
Ars Technica has a good article about IPv6 adoption up. We were running IPv6 in labs in the early 2000s and it continues to run heavily in the labs. I can enable services rapidly and easily for IPv6 on ADCs and run dual stack keeping the backend IPv4, but still, interest remains low.
Good to see some of the rules getting addressed. I need to review the ones I've disabled to determine what's wrong with them. So much stuff to do, so little stuff I get paid to do. :)
-
@jflsakfja:
EDIT!!!! MISSED THE QUICK CHECKBOX. TICK THAT!
Question re floating rules for those who understand the black magic…
I've got my floating rules set up as earlier in this thread, all works fine except I now have a client who hosts his site on GoDaddy which conflicts with an IP in one of the PRI2 lists.
I can fix it easily by disabling the QUICK match on PRI2 rules as I have a whitelist in my LAN interface page which allows this address - but this seems inefficient, to my limited knowledge at least.
I tried to create a PASS/Quick rule in the floating page higher up than the PRI2 entries but this doesn't work... it kind of half does, but I still get a failed-to-load page ultimately, although not as quickly. I suspect the outbound might work but not the accompanying inbound PASS... although I'm way out of my depth here trying to debug it. Do PASS entries work in the floating page and if so, any guidance for a novice?
thx in adv -
@irj972:
Question re floating rules for those who understand the black magic…
For the Pass Floating Rule, did you select both the inbound and outbound interfaces?
-
I duplicated what I had with the PRI1 etc for in & out with same set of interfaces, i.e
Whitelist Source = WAN (& VPN_WAN)
Whitelist Dest = LAN (& VPN_LAN)
pic attached in case it helps
EDIT: I trimmed some info from my setup to reduce noise, however I now suspect this omission might have something to do with my problem. I suspect a gateway related issue, as I run a VPN and it possibly doesn't know how to route packets originating from the VPN interface…. I'll try and verify.
If anyone knows any good resources on how to build good firewall rules I'd appreciate the pointer, learning by trial and error is tough going. thx again.
-
About IPv6 blacklisting.
It seems that experts in general are of the opinion that blacklisting has not been all that effective in the IPv4 world, so we may as well just abandon it for IPv6, and concentrate on other tactics instead.
Say we still want to blacklist, I think we are going to quit blacklisting individual addresses altogether, and ban an entire /64 or /42 outright.
Has anyone dabbled in IPv6 blacklisting yet? How is the quality of public IPv6 blacklists? -
@G.D.:
About IPv6 blacklisting.
It seems that experts in general are of the opinion that blacklisting has not been all that effective in the IPv4 world, so we may as well just abandon it for IPv6, and concentrate on other tactics instead.
Say we still want to blacklist, I think we are going to quit blacklisting individual addresses altogether, and ban an entire /64 or /42 outright.
Has anyone dabbled in IPv6 blacklisting yet? How is the quality of public IPv6 blacklists?
Have yet to be hit by anything "massive" in IPv6, so I'll just speculate.
IPv6 /64s are the equivalent of IPv4 /24s. An attacker can "move" within that subnet (dynamic IP, or just fast changing static IPs), and as long as he moves, the blacklisting has already gone to hell. I understand that an attacker can also move to a different subnet, but the same thing applies.
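To make the /64 point concrete, here is a small sketch of my own (not from the guide) that collapses observed attacker addresses to their covering /64 before blocking. It assumes the addresses are already fully expanded, which is a real limitation of doing this in plain shell:

```shell
# Collapse fully-expanded IPv6 addresses (one per line on stdin) to /64s.
# Assumes expanded form, e.g. 2001:0db8:0000:0001:aaaa:bbbb:cccc:dddd;
# compressed "::" forms would need expansion first.
to_slash64() {
  awk -F: '{ printf "%s:%s:%s:%s::/64\n", $1, $2, $3, $4 }' | sort -u
}

# Two addresses in the same /64 collapse to one block entry:
printf '2001:0db8:0000:0001:aaaa:bbbb:cccc:dddd\n2001:0db8:0000:0001:1111:2222:3333:4444\n' \
  | to_slash64
# -> 2001:0db8:0000:0001::/64
```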
The way I currently do it, is if I see a single IP from a subnet, I block it (already taken care by suricata). If I see multiple IPs blocked from that subnet, I block the entire subnet and alert their upstream. If I'm ignored (suricata still firing up alerts), I look into the subnet more. What country does it come from? Does the upstream already have a different subnet on my permanently banned lists? How long (going back a few years) is that subnet being "naughty"? Did the same ISP have any other interesting subnet (again, going back a few years)? Depending on the answer to those questions and the severity of the traffic, The entire subnet is added to my permanently banned lists.
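That "multiple IPs from the same subnet" escalation is easy to automate for IPv4. A sketch (function name and threshold are mine) that buckets blocked addresses into /24s and prints the ones worth a closer look:

```shell
# Read one IPv4 address per line on stdin; print "count subnet/24" for
# every /24 with at least $1 (default 3) blocked hosts in it.
flag_subnets() {
  threshold=${1:-3}
  awk -F. '{ print $1"."$2"."$3".0/24" }' \
    | sort | uniq -c \
    | awk -v t="$threshold" '$1 >= t { print $1, $2 }'
}

# Three hits in 203.0.113.0/24, one in 198.51.100.0/24:
printf '203.0.113.1\n203.0.113.2\n203.0.113.9\n198.51.100.1\n' | flag_subnets 3
# -> 3 203.0.113.0/24
```

On pfSense the input could presumably come straight from the snort2c table, e.g. `pfctl -t snort2c -T show | flag_subnets 5`, though I haven't wired that up myself.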
That's why I strongly disagree with using every single list that's out there. Use the bare essentials, weed out 90% of the traffic, let suricata weed out another 5% and only deal with the 5% that remains. Where it becomes difficult to keep it all together is hosts on the permanently banned list still causing alerts in suricata. There is a way to ignore them using a passlist, but it's not ideal. What I really need to do is ignore the packets originating from a predefined alias and not send them to suricata, but that's a different story.
I think that the way IPv6 will be handled will be similar to what I described above. The bad thing about blocking entire subnets is the "innocent" people that go down with them. I consider them collateral damage. In practice (as described in this guide) the way to perfect (and yes there IS perfect) network security is approaching it in a layered way. And anyone already typing "but eventually you will make a mistake" stop typing and read on.
By layered I mean:
- build a trench around your castle. You have already stopped the horsemen.
- build high strong walls, with sentries and hot oil. You have already stopped the heavy infantry.
- build a strong gate. You have already stopped the rams.
What about catapults? What about them setting your gate on fire? Setting the gate on fire is easily circumvented. Get an iron gate. Catapults? You should already know that catapults are coming, and you should have already sabotaged them. What good are spies if you don't use them?
No I didn't go off in a tangent. Dealing with portscanners shouldn't be your priority. That should already be handled by suricata/pfsense. If the person doesn't know what's running on your hosts, then he can't just guess "ah, they are running ubuntu, let's launch this exploit hoping it will work". How many of your webservers are advertising the server's version in the response? Why not just say "webserver"? The attacker is already trying to find vulnerabilities to your network by interacting with the server. Does your server respond in a way that will alert suricata that host X is poking around, resulting in automatic banning? Does the host handle the automatic banning itself, then push the blocked host to your router? Is the software running on the server properly updated, or is it a 13 year old software (sidenote: checked a long list of hosts the other day (not mine) and some of them were running 13 year old software!!!). If it's not updated, why is it not updated? If it is updated, are there any other mitigations enabled? Is the underlying OS hardened in any way? Or is it happily running a completely up to date X ftp server, but also happily refusing logins without enforcing maximum retries? Is every ssh client connected jailed? Is every single php script running in the same pool? Are basedir restrictions being applied, or can the attacker simply upload a script anywhere and execute it? Does the webserver have access to the same memory that the ssh server uses? Can it write to it?
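On the "why not just say webserver" point: most servers can be told to stop advertising their exact version. For nginx, for example, it's a one-line config change (Apache has ServerTokens for the same purpose); this is an illustrative fragment, not a complete hardening config:

```
# nginx: send "Server: nginx" instead of "Server: nginx/1.x.y"
http {
    server_tokens off;
}
```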
If you did your absolute best to secure a network and somebody managed to get in, you didn't make a mistake. You simply didn't try hard enough.
Using lists+pfsense+suricata you have already cut down the bad traffic by a LARGE amount. Don't simply sit idling and watch the rest of the traffic go through. Did something new pop up in the logs? Investigate it and don't ignore it.
MLS systems aren't simply installing SELinux and setting it to MLS and boasting "Hey, look at me, I'm running an MLS system!". NO. MLS systems are systems that were designed to separate security into different layers, and each layer dealing with something as efficiently as possible.
As long as the systems are capable of protecting themselves, you shouldn't need to worry about mass blocking hosts. That will naturally come on its own. What this does is add yet another layer to the security: Security through obscurity. Note to industry leaders: FOR THE MILLIONTH TIME: SECURITY THROUGH OBSCURITY IS REAL SECURITY.
Now, if there was a way to keep track of the blocked hosts and identify subnets that could be automatically banned…I'm looking at you BBcan177 ;) (same thing the script does, but using pfsense's snort2c table)
Reaction after previewing the post: "When did I type all that?"
-
Getting a lot of these errors from lists in my syslog:
php: rc.filter_configure_sync: The command '/usr/bin/fetch -T 5 -q -o '/var/db/aliastables/AbusePalevo.txt.tmp' 'https://127.0.0.1:43/badips/AbusePalevo.txt'' returned exit code '1', the output was 'fetch: https://127.0.0.1:43/badips/AbusePalevo.txt: Operation timed out'
-
Port is wrong on your aliases; it should be: https://127.0.0.1:443/badips/AbusePalevo.txt not 43
Something looks off in the directory too…
Getting a lot of these errors from lists in my syslog:
php: rc.filter_configure_sync: The command '/usr/bin/fetch -T 5 -q -o '/var/db/aliastables/AbusePalevo.txt.tmp' 'https://127.0.0.1:43/badips/AbusePalevo.txt'' returned exit code '1', the output was 'fetch: https://127.0.0.1:43/badips/AbusePalevo.txt: Operation timed out'
-
Port is wrong on your aliases; it should be: https://127.0.0.1:443/badips/AbusePalevo.txt not 43
Something looks off in the directory too…
Here is where my de-duplicated lists are:
$ ls /usr/local/www/badips ALIENVAULT.txt AbusePalevo.txt AbuseSpyeye.txt AbuseZeus.txt Atlas_Attacks.txt Atlas_Botnets.txt Atlas_Fastflux.txt Atlas_Phishing.txt Atlas_SSH.txt Atlas_Scans.txt Blut_TOR.txt CIArmy.txt DRG_SSH.txt DRG_VNC.txt DRG_http.txt DangerRulez.txt ET_Comp.txt ET_TOR.txt Feodo_Bad.txt Feodo_Block.txt Geopsy.txt IBlock_BT_FS.txt IBlock_BT_Hijack.txt IBlock_BT_Spy.txt IBlock_BT_Web.txt IBlock_Badpeer.txt IBlock_Onion.txt Infiltrated.txt MDL.txt MalwareGroup.txt NOThink_BL.txt NOThink_Malware.txt NOThink_SSH.txt OpenBL.txt SRI_Attackers.txt SRI_CC.txt Shunlist.txt Spamhaus_drop.txt Spamhaus_edrop.txt VMX.txt WatchGuard.txt dShield_Block.txt dShield_Top.txt malc0de.txt
Ok so it should really be "https://127.0.0.1:443/usr/local/www/badips/AbusePalevo.txt"?
When done correctly shouldn't the IPs show up in the lists' Values in Firewall>>>> Aliases?
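As a side note, whether lists really are de-duplicated against each other can be spot-checked with standard tools. A quick sketch (function name and paths are placeholders):

```shell
# Print any entry that appears in more than one of the given list files
# (lists are assumed to hold one IP/CIDR per line).
find_dupes() {
  sort "$@" | uniq -d
}

# Example (path is a placeholder):
#   cd /usr/local/www/badips && find_dupes *.txt
```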
Also, would pfBlocker.widget.php_PATCH and diag_dns.php_PATCH work with 2.1.5?
When i try to test the patches i get this error:
Output of full patch apply test: /usr/bin/patch --directory=/usr/local/www/ -t -p1 -i /var/patches/5401274d202ba.patch --check --forward --ignore-whitespace Hmm... Looks like a unified diff to me... The text leading up to this was: -------------------------- |--- /usr/local/www/widgets/widgets/pfBlocker.widget.php 2014-06-28 13:11:18.000000000 -0400 |+++ /usr/local/www/widgets/widgets/pfBlocker.widget.php 2014-06-28 13:06:55.000000000 -0400 -------------------------- No file to patch. Skipping... Hunk #1 ignored at 2. Hunk #2 ignored at 29. Hunk #3 ignored at 39. Hunk #4 ignored at 53. Hunk #5 ignored at 61. Hunk #6 ignored at 84. Hunk #7 ignored at 92. 7 out of 7 hunks ignored--saving rejects to usr/local/www/widgets/widgets/pfBlocker.widget.php.rej done
Output of full patch apply test: /usr/bin/patch --directory=/usr/local/www/ -t -p1 -i /var/patches/5401259cb1b0c.patch --check --forward --ignore-whitespace Hmm... Looks like a unified diff to me... The text leading up to this was: -------------------------- |--- /usr/local/www/diag_dns.php 2014-06-23 14:22:26.000000000 -0400 |+++ /usr/local/www/diag_dns.php 2014-06-23 14:22:02.000000000 -0400 -------------------------- No file to patch. Skipping... Hunk #1 ignored at 114. Hunk #2 ignored at 158. Hunk #3 ignored at 179. Hunk #4 ignored at 276. 4 out of 4 hunks ignored--saving rejects to usr/local/www/diag_dns.php.rej done
-
Getting a lot of these errors from lists in my syslog:
php: rc.filter_configure_sync: The command '/usr/bin/fetch -T 5 -q -o '/var/db/aliastables/AbusePalevo.txt.tmp' 'https://127.0.0.1:43/badips/AbusePalevo.txt'' returned exit code '1', the output was 'fetch: https://127.0.0.1:43/badips/AbusePalevo.txt: Operation timed out'
How did you configure the alias table? It should be a "URL Table".
I would also recommend just using the existing IR_ Tables that the Script already created. This saves you the effort of making dozens of Alias Tables.
/usr/local/www/aliastables/IR_MAIL
/usr/local/www/aliastables/IR_PRI1
/usr/local/www/aliastables/IR_IB
/usr/local/www/aliastables/IR_SEC1
/usr/local/www/aliastables/IR_PRI2
/usr/local/www/aliastables/IR_SEC3
/usr/local/www/aliastables/IR_TOR
/usr/local/www/aliastables/IR_SEC2
/usr/local/www/aliastables/IR_PRI3
-
@G.D.:
About IPv6 blacklisting.
It seems the general expert opinion is that blacklisting has not been all that effective in the IPv4 world, so we may as well abandon it for IPv6 and concentrate on other tactics instead.
Say we still want to blacklist; I think we are going to quit blacklisting individual addresses altogether and ban entire /64s or /42s outright.
Has anyone dabbled in IPv6 blacklisting yet? How is the quality of public IPv6 blacklists?
I haven't even started to look at IPv6. I found this old link on the Spamhaus website:
http://www.spamhaus.org/organization/statement/012/spamhaus-ipv6-blocklists-strategy-statement
I don't know of any IPv6 blocklists. If anyone has time to research, please forward your findings to the group and we can begin to work out a process to include them.
@jflsakfja:
Now, if there was a way to keep track of the blocked hosts and identify subnets that could be automatically banned… I'm looking at you, BBcan177 ;) (same thing the script does, but using pfSense's snort2c table)
In regards to the snort2c table, I think keeping track of repeat offenders is a great idea. This should also involve the pfSense firewall blocks.
I have been working away at getting a beta of pfBlocker that incorporates my script and some other new features; I believe I am 90% there. It would be nice to get help from the pfBlocker developers, but they seem to have no interest in supporting a new release of pfBlocker. I am not even sure if the devs want it, or will support it, when I get it finalized. It may end up being a new package…
A few members have been helping to beta test the package, and that is helping weed out issues. If anyone has real time to spare and is interested in helping beta test, drop me a PM. Thanks to Cino and wcrowder for their support!! ;) ;)
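As a rough illustration of the repeat-offender idea (not the script's actual logic), blocked hosts could be tallied per /24 and subnets flagged once they cross a threshold. The addresses below are made-up sample data; on pfSense the input could come from `pfctl -t snort2c -T show`:

```shell
# Hypothetical sketch: count blocked IPv4 hosts per /24 and print the
# subnets with 3 or more offenders. Sample addresses are made up; on
# pfSense the list could come from `pfctl -t snort2c -T show`.
printf '%s\n' 198.51.100.1 198.51.100.7 198.51.100.9 203.0.113.5 |
awk -F. '{c[$1"."$2"."$3]++} END {for (n in c) if (c[n] >= 3) print n ".0/24"}'
# prints: 198.51.100.0/24
```

A real implementation would also need to age entries out, so a subnet isn't banned forever on the strength of old blocks.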
-
Also, would pfBlocker.widget.php_PATCH and diag_dns.php_PATCH work with 2.1.5?
I would assume there are changes to those files in 2.1.5, so I will need to look at them and write a new patch.
From looking at 2.2 and at the 2.1.5 release notes, I think they removed the ability to resolve external IP lookups. If that's the case, I'm not sure whether we should add the functionality back manually or leave it alone. Another possibility is to add a new log in pfBlocker just for the blocklists and resolve from there. That way there is no mucking around with pfSense system files.
- from the 2.1.5 Release Notes:
Remove javascript alert DNS resolution action from the firewall log view. It was already removed from 2.2, and it's better not to allow a GET action to perform that action.
-
Getting a lot of these errors from lists in my syslog:
php: rc.filter_configure_sync: The command '/usr/bin/fetch -T 5 -q -o '/var/db/aliastables/AbusePalevo.txt.tmp' 'https://127.0.0.1:43/badips/AbusePalevo.txt'' returned exit code '1', the output was 'fetch: https://127.0.0.1:43/badips/AbusePalevo.txt: Operation timed out'
How did you configure the Alias Table? It Should be a "URL Table".
I would also recommend just using the existing IR_ Tables that the Script already created. This saves you the effort of making dozens of Alias Tables.
/usr/local/www/aliastables/IR_MAIL
/usr/local/www/aliastables/IR_PRI1
/usr/local/www/aliastables/IR_IB
/usr/local/www/aliastables/IR_SEC1
/usr/local/www/aliastables/IR_PRI2
/usr/local/www/aliastables/IR_SEC3
/usr/local/www/aliastables/IR_TOR
/usr/local/www/aliastables/IR_SEC2
/usr/local/www/aliastables/IR_PRI3
Thanks BBcan177,
I'll try the IR_ Tables instead.
Currently, with the script installed and only Suricata configured following the guidelines in this thread, my RAM usage averages in the 80s (%).
I was thinking of setting up a simple cache (squid proxy) alongside it, but with my RAM usage so high, is there any way I can still get the most out of Suricata/pfiprep with the squid cache?
What lists/rules would be fine to disable? (I've followed jflsakfja's guidelines, but I still think there are some lists/rules that may not apply to my usage scenario.)
I have a 50mbps connection, and my pfSense system is an Athlon 64 3000+ with 1.25GB of RAM and 2 NICs (LAN/WAN). That's connected to a switch with a wireless AP, 5 wired clients, and possibly over 25 wireless clients.
We are basically a media-consuming household (HD streaming, downloads, torrents, Skype, TeamViewer, Facebook, etc.), and if it were up to me I'd block most of these kinds of traffic, but all hell would break loose if I did that, seriously.
Other than the OpenVPN server I have on my pfSense machine, squid cache, Suricata, pfiprep… that's about it.
Any rules/lists that you recommend disabling to reduce memory usage?
-
@G.D.:
About IPv6 blacklisting.
It seems that experts in general have an opinion that blacklisting has not been all that effective in IPv4 world, so may as well just abandon it for IPv6, and concentrate on other tactics instead.
Say we still want to blacklist, I think we are going to quit blacklisting individual addresses all together, and ban entire /64 or /42 outright.
Has anyone dabbled in IPv6 blacklisting yet? How is the quality of public IPv6 blacklists?
I see that jflsakfja already replied, and I'd agree with nearly all of it, except that I'd recommend blocking on subnet boundaries with IPv6, and here's why. A /64 is the smallest advertisable block size for IPv6; in fact, that size will likely be assigned to every home/consumer connection. A /56 or /64 will likely be the smallest allocated size, and you'll get your own /64 or so for your home. Some are even saying that could be a /48, but I doubt that. The trick is that not everyone is a home user, and you could have issues with hosted systems and hitting an innocent party with a /64 blacklist entry. There are other interesting characteristics about IPv6, and I think we will see some innovation in security due to it once we have broad uptake of the address system.
So, blacklisting in IPv6 should near certainly be done at those boundaries, or we'd be in hairy hell chasing an effective blocking measure. I'll take that conundrum over the IPv4 hell we are in now.
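Following that reasoning, a /64-boundary blacklist could be produced by collapsing individual offender addresses to their first four hextets. This sketch assumes fully expanded (uncompressed) addresses and made-up sample data; real tooling should canonicalize the "::" shorthand first:

```shell
# Reduce offender addresses to deduplicated /64 prefixes. Assumes fully
# expanded IPv6 addresses (no "::" compression); sample data is made up.
printf '%s\n' \
  '2001:0db8:0000:0000:0000:0000:0000:0001' \
  '2001:0db8:0000:0000:ffff:ffff:ffff:ffff' \
  '2001:0db8:0000:0001:0000:0000:0000:0005' |
cut -d: -f1-4 | sort -u | sed 's/$/::\/64/'
# prints:
# 2001:0db8:0000:0000::/64
# 2001:0db8:0000:0001::/64
```

Note how two offenders in the same /64 collapse into one entry, which is exactly what keeps an IPv6 table from exploding in size.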
-
Thanks BBcan177,
I'll try the IR_ Tables instead.
Currently with the script installed and only Suricata configured following the guidelines on this thread my RAM usage is on average in the 80s%.
I was thinking of setting up simple cache (squid proxy) along with it but with my RAM usage so high is there anyway i can still get the most out of using Suricata/pfiprep with the squid cache?
What lists/rules would be fine to disable? (I've followed jflsakfja's guidelines but I still think that there are some lists/rules that may not apply to my usage scenario.)
I have a 50mbps connection & my pfSense system is an Athlon 64 3000+ with 1.25GB of RAM and 2 NICs (LAN/WAN). That's connected to a switch with a wireless AP, 5 wired clients and possibly over 25 wireless clients.
We are basically a media consuming household (HD streaming, downloads, torrents, Skype, TeamViewer, Facebook etc.) and if it was up to me I'd block most of these kinds of traffic but all hell would break loose if I did that, seriously.
Other than my OpenVPN server that I have on my pfSense machine, squid cache, Suricata, pfiprep… that's about it.
Any rules/lists that you recommend disabling to reduce memory usage?
I'm seeing 22-25% of 4GB RAM usage on systems configured to a T using this guide, so your 80% of 1.25GB seems about right. 1.25GB seems odd though: what is that, a 1GB + 256MB stick, or 2x512MB + 256MB? In either case, getting more RAM is what I would do. 1GB shouldn't be more than $20. If you are using 1GB + 256MB, just get another 1GB and replace the 256MB module. Since it takes about 1GB for each system configured according to this guide, you should drop down to 50% RAM usage, which should give you plenty of room for other things if you need it.
-
@jflsakfja:
Thanks BBcan177,
I'll try the IR_ Tables instead.
Currently with the script installed and only Suricata configured following the guidelines on this thread my RAM usage is on average in the 80s%.
I was thinking of setting up simple cache (squid proxy) along with it but with my RAM usage so high is there anyway i can still get the most out of using Suricata/pfiprep with the squid cache?
What lists/rules would be fine to disable? (I've followed jflsakfja's guidelines but I still think that there are some lists/rules that may not apply to my usage scenario.)
I have a 50mbps connection & my pfSense system is an Athlon 64 3000+ with 1.25GB of RAM and 2 NICs (LAN/WAN). That's connected to a switch with a wireless AP, 5 wired clients and possibly over 25 wireless clients.
We are basically a media consuming household (HD streaming, downloads, torrents, Skype, TeamViewer, Facebook etc.) and if it was up to me I'd block most of these kinds of traffic but all hell would break loose if I did that, seriously.
Other than my OpenVPN server that I have on my pfSense machine, squid cache, Suricata, pfiprep… that's about it.
Any rules/lists that you recommend disabling to reduce memory usage?
I'm seeing 22-25% of 4GB RAM usage on systems configured to a T using this guide, so your 80% of 1.25GB seems about right. 1.25GB seems odd though: what is that, a 1GB + 256MB stick, or 2x512MB + 256MB? In either case, getting more RAM is what I would do. 1GB shouldn't be more than $20. If you are using 1GB + 256MB, just get another 1GB and replace the 256MB module. Since it takes about 1GB for each system configured according to this guide, you should drop down to 50% RAM usage, which should give you plenty of room for other things if you need it.
Thanks for the reply.
You're correct, it's a 1GB + 256MB stick, and usage is around 83% of that most of the time. If that's average use for this guide, then I'll invest in another 1GB of RAM. Security is more important than bandwidth/speed ;D
-
Getting a lot of these errors from lists in my syslog:
php: rc.filter_configure_sync: The command '/usr/bin/fetch -T 5 -q -o '/var/db/aliastables/AbusePalevo.txt.tmp' 'https://127.0.0.1:43/badips/AbusePalevo.txt'' returned exit code '1', the output was 'fetch: https://127.0.0.1:43/badips/AbusePalevo.txt: Operation timed out'
How did you configure the Alias Table? It Should be a "URL Table".
I would also recommend just using the existing IR_ Tables that the Script already created. This saves you the effort of making dozens of Alias Tables.
/usr/local/www/aliastables/IR_MAIL
/usr/local/www/aliastables/IR_PRI1
/usr/local/www/aliastables/IR_IB
/usr/local/www/aliastables/IR_SEC1
/usr/local/www/aliastables/IR_PRI2
/usr/local/www/aliastables/IR_SEC3
/usr/local/www/aliastables/IR_TOR
/usr/local/www/aliastables/IR_SEC2
/usr/local/www/aliastables/IR_PRI3
Followed your suggestion and changed them to URL Tables, and it seems to be working (the IPs show up when I mouse over the aliases), but I still get syslog errors.
Aug 30 09:40:26 php: rc.filter_configure_sync: The command '/usr/bin/fetch -T 5 -q -o '/var/db/aliastables/IR_SEC2.txt.tmp' 'https://127.0.0.1:443/usr/local/www/aliastables/IR_SEC2'' returned exit code '1', the output was 'fetch: https://127.0.0.1:443/usr/local/www/aliastables/IR_SEC2: Operation timed out'
Is this something to be concerned about?
The pfIP Rep widget shows the last update was Aug 30 09:43 with a green (up) status arrow, though…
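That timeout looks like the docroot problem again: the webGUI serves `/usr/local/www` as its root, so a URL Table URL must not repeat that prefix. A quick sanity check, assuming the webGUI listens on 127.0.0.1:443 (adjust host and port to your setup):

```shell
# Build the alias URL by stripping the webGUI document root from the
# file path, then try fetching it with FreeBSD's fetch(1).
# The 127.0.0.1:443 host/port are assumptions; adjust to your webGUI.
file=/usr/local/www/aliastables/IR_SEC2
url="https://127.0.0.1:443${file#/usr/local/www}"
echo "$url"   # https://127.0.0.1:443/aliastables/IR_SEC2
fetch -T 5 -o /dev/null "$url" 2>/dev/null \
  && echo "alias file reachable" \
  || echo "not reachable from here"
```

Compare the echoed URL with the one in the log: the failing fetch asks for `/usr/local/www/aliastables/IR_SEC2`, i.e. the docroot path pasted into the URL, which the webGUI cannot serve.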