Snort Passlist - Only 1 Alias



  • Installed Snort and enabled it on the WAN interface.
    -- Which IP to block > Both
    -- Home Net > passlist_23180 (my only passlist)
    -- External Net > default
    -- Pass List > passlist_23180

    Aliases
    -- Created individual IP aliases for the end client's website as well as our LabTech server
    -- These are 2 separate aliases (Website & LabTech)

    Pass List
    -- passlist_23180 > added the LabTech alias
    -- Checked all of the auto-generated IP address options

    My question is: what are the best practices for assigning multiple aliases to the Pass List?



  • In pfSense an Alias can actually contain other Aliases – in other words you can nest them.  This is how I do it.  I create individual aliases for a series of internal hosts, and then add all those individual aliases to a single new alias that I use in my Pass List.  You can use this same nested alias feature in firewall rules, too.  It's a very powerful feature!
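    To make the nesting concrete, here is a hypothetical sketch of how a nested alias flattens into one host list. The alias names and IP addresses below are invented for illustration; pfSense does this resolution internally when it builds the Pass List.

    ```python
    def flatten_alias(name, aliases, seen=None):
        """Recursively expand an alias that may contain other aliases."""
        if seen is None:
            seen = set()
        if name in seen:                 # guard against circular references
            return []
        seen.add(name)
        hosts = []
        for entry in aliases[name]:
            if entry in aliases:         # nested alias -> recurse into it
                hosts.extend(flatten_alias(entry, aliases, seen))
            else:                        # plain IP address -> keep it
                hosts.append(entry)
        return hosts

    # Invented example: two individual aliases rolled up into one
    # parent alias that would be referenced by the Pass List.
    aliases = {
        "Website": ["203.0.113.10"],
        "LabTech": ["198.51.100.25"],
        "PassListHosts": ["Website", "LabTech"],
    }

    print(flatten_alias("PassListHosts", aliases))
    # ['203.0.113.10', '198.51.100.25']
    ```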

    Bill



  • This is very helpful! However, what do you recommend for FQDNs of websites that may sit behind a load balancer with multiple IPs? For example, irs.gov?



  • @simple1689:

    This is very helpful! However, what do you recommend for FQDNs of websites that may sit behind a load balancer with multiple IPs? For example, irs.gov?

    You CANNOT use FQDN aliases with either Snort or Suricata.  There is no support for anything but fixed IP addresses in the current code.  The Pass List is parsed at startup, and the aliases are converted to their current IP addresses at that point in time and saved in a fixed table in RAM within Suricata or Snort.  They can't be changed until the next daemon restart.  This is enforced for speed reasons: you don't want packet processing to wait on a DNS lookup to succeed.  The other problem is that the pf tables pfSense uses to store FQDN resolutions (updated every 5 minutes) can only be brute-force searched for matches.  That would potentially slow down packet processing on systems with heavy traffic.  If a user had only 5 to 10 FQDN aliases it would probably not even be noticed, but I'm sure folks would try to cram in a ton of FQDN aliases and then would be fussing because the firewall throughput hit the toilet… ;)

    Bill



  • Hi Bill.

    I fully understand your point and agree that the lookup tables must be as fast and efficient as possible and that DNS lookups are out of scope here. But could it not be possible to 'refresh' the in-memory tables from time to time?

    FQDN listing is a must in some cases. I'm having BIG troubles with Snort because of this, and disabling rules is not a good path to go down. I have a cloud storage service where I keep an offsite backup of many things. Moving the data to that storage fires LOTS of alerts with enough priority to block the storage IP, thus breaking the transfer (corrupting the backup).

    The problem is that the storage is published/accessed by FQDN, so I have to put the IP address behind the FQDN in the suppress list. But from time to time that IP changes, of course without any notification, and everything breaks again. Disabling ALL the alerts for ALL the IPs would render Snort useless, as in 100 GB of data there are many, many signatures found.

    So an intermediate solution may be to update the in-memory tables from time to time (a user-adjustable interval would be great). How to do this with data flowing through the firewall at full speed, you may ask? Well: an alternate table could be built with the updated info, then an atomic swap of tables could happen, and then the former table could be discarded to avoid memory leaks. That way, updating the tables should not in any manner affect the performance of the system, and we could have this great improvement.
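    The build-then-swap idea above can be sketched in a few lines. This is Python for illustration only (Snort's internals are C, and this is not its actual code); the point is that readers only ever see a complete table, old or new, never a half-built one.

    ```python
    class PassTable:
        """Toy pass list with 'build aside, then swap' refresh."""

        def __init__(self, ips):
            self._table = frozenset(ips)   # immutable snapshot for the fast path

        def contains(self, ip):
            return ip in self._table       # lookups always hit a complete table

        def refresh(self, new_ips):
            # Build the replacement off to the side, then swap it in with a
            # single reference assignment; the old table is then reclaimed
            # by garbage collection, so there is no leak and no locking on
            # the lookup path in this sketch.
            self._table = frozenset(new_ips)

    t = PassTable(["198.51.100.25"])       # invented addresses
    print(t.contains("198.51.100.25"))     # True
    t.refresh(["203.0.113.77"])            # e.g. after a periodic DNS re-lookup
    print(t.contains("198.51.100.25"))     # False - stale entry is gone
    print(t.contains("203.0.113.77"))      # True
    ```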

    All this is theoretical, as I do not know the actual mechanisms that Snort uses to handle the data flow, so it is only a working hypothesis. But if there were any way to achieve this goal, many, many of us would be more grateful than we already are.

    At least, thanks for reading this.

    Best regards,
    Miguel.



  • @mikee:

    …could it not be possible to 'refresh' the in-memory tables from time to time? … An alternate table could be built with the updated info, then an atomic swap of tables could happen, and then the former table could be discarded…

    I have continued to think over scenarios in my head.  Yes, it is conceivable that the custom Snort plugin I use on pfSense could periodically refresh its internal tables.  It would take a fairly major rewrite of the custom code, though.  Refreshing on an interval, maybe every 5 minutes to match up with the filterdns lookups, would be the natural choice.  The other issue that many folks overlook, though, is that the big web sites folks usually want to use FQDNs for (Amazon Web Services, YouTube, and the like) all have many IP addresses and many scattered data centers.  When you do a DNS query you can possibly get different results each time.  So the firewall could do a lookup on an FQDN and update its internal IP address, and then 10 seconds later a client behind the firewall makes a request for the same FQDN and gets another IP entirely.  A block would still happen, as the client would use the IP address it just received while Snort is using the different IP it got 10 seconds earlier (another IP from the load balancer).

    Having a refreshing FQDN table would work for FQDNs that hardly ever change IP address, or that do so rarely and don't have a farm of IP addresses behind them.  The feature becomes much less useful if the FQDN (domain name) sits behind a CDN (content delivery network).  For some anecdotal evidence, search the Firewall sub-forum for all the threads from folks wondering why adding "www.youtube.com" as an FQDN alias and using that as the Destination in a block rule does not result in the blocking of YouTube videos on their network.
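    A toy model of that race, with all addresses invented: a resolver that rotates through a load balancer's A records hands the firewall and the client different answers for the same name, so a snapshot taken by the firewall misses the address the client actually connects to.

    ```python
    import itertools

    # Invented pool of A records behind one FQDN's load balancer.
    pool = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]
    rotate = itertools.cycle(pool)

    def dns_lookup():
        # Each query may return a different address from the pool.
        return next(rotate)

    firewall_pass_ip = dns_lookup()   # firewall resolves the FQDN at refresh time
    client_dest_ip = dns_lookup()     # client resolves the same name seconds later

    print(firewall_pass_ip, client_dest_ip)
    print(firewall_pass_ip == client_dest_ip)
    # False here: the pass-list entry no longer matches the client's traffic
    ```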

    Bill



  • Thanks for your reply Bill.

    Again, you are quite right about the fact that behind a given FQDN there may be a lot of IP addresses. But this problem also applies to any FQDNs used elsewhere on the platform. There is no guarantee that two consecutive requests return the same IP addresses, and if you use them to build any sort of inter-dependence between them you could get undesired results.

    But I think it may be better to have this than nothing. Because of DNS caching it is very probable that two requests get the same result, since they both may be using the same DNS server, or even pfSense itself as a caching DNS server.

    Instead of the message saying that FQDNs are not supported, a message could advise the admin that she should use the same DNS server for pfSense and the internal clients, and that using FQDNs is not foolproof because of the chance that two consecutive requests receive different replies.

    I am aware of the problem of blocking access to YouTube or other undesired sites by IP lists; that does not work, and a different approach (protocol analysis) has to be taken instead.

    But unless the effort to put this into place is so high that it makes the task unrewarding, being able to use FQDNs may have more advantages than inconveniences, from my point of view.

    Miguel.

