    ELK + pfSense 2.3 Working

      ando1 last edited by

      OK after a lot of reading and researching, I have successfully created an ELK stack and can monitor my pfsense 2.3 firewall. I am posting the steps I used below along with the files needed. You may need to modify some of the files to fit your IP address and environment. Also I posted the reference links I used to create the steps.

      I wanted to give credit to the sites that I got most of this information from as it helped me in figuring out how to make this work.

      UPDATE 11/17: I also found this site and was able to get version 5 working with Ubuntu server 16+: http://pfelk.3ilson.com/

      My original post on Reddit: https://www.reddit.com/r/PFSENSE/comments/5axoaj/getting_elk_to_work_with_pfsense_23/

      Reference Links:
      http://secretwafflelabs.com/2015/11/06/pfsense-elk/
      https://elijahpaul.co.uk/monitoring-pfsense-2-1-logs-using-elk-logstash-kibana-elasticsearch/
      https://elijahpaul.co.uk/updated-monitoring-pfsense-logs-using-elk-elasticsearch-logstash-kibana-part-1/

      Prerequisites:
      • Ubuntu 14.04 Desktop - http://releases.ubuntu.com/14.04/
      • Kibana 4.5.4
      • Logstash 2.2.4
      • Elasticsearch 2.4.0
      • pfSense 2.3.2

      Files Needed (also in attached zip file)
      (You will need to modify some of these to fit your environment)
      • Kibana4 init script
      • pfSense 2.2+ grok file - http://secretwafflelabs.com/files/pfsense2-2.grok
      • 02-syslog-input.conf - http://secretwafflelabs.com/files/02-syslog-input.conf
      • 20-syslog-filter.conf - http://secretwafflelabs.com/files/20-syslog-filter.conf
      • 81-pfsense-filter.conf - http://secretwafflelabs.com/files/81-pfsense-filter.conf
      • 99-elasticsearch-output.conf - http://secretwafflelabs.com/files/99-elasticsearch-output.conf
      • Dashboard - http://secretwafflelabs.com/files/Firewall_External_Dash.json
      • Visualizations Export - http://secretwafflelabs.com/files/Firewall_External_Visual.json
      • Saved Searches Export  - http://secretwafflelabs.com/files/export.json

      Steps:
      1. Log into the Ubuntu desktop and type sudo -i so that the commands below run as root
      2. Install Java

      apt-get remove --purge openjdk*
      
      add-apt-repository -y ppa:webupd8team/java
      
      apt-get update
      
      apt-get -y install oracle-java8-installer
      

      3. Verify java version

      java -version
      

      Output
      java version "1.8.0_111"
              Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
              Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)

      4. Install ElasticSearch

      wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/deb/elasticsearch/2.4.0/elasticsearch-2.4.0.deb
      
      dpkg -i elasticsearch-2.4.0.deb
      

      5. Download and install Logstash

      wget https://download.elastic.co/logstash/logstash/packages/debian/logstash_2.2.4-1_all.deb
      
      dpkg -i logstash_2.2.4-1_all.deb
      

      6. Create a patterns directory for Geo_IP

      cd /etc/logstash/conf.d
      
      mkdir patterns
      

      7. Create pfsense grok file

      cd /etc/logstash/conf.d/patterns
      
      nano pfsense2-2.grok
      
      Copy the contents of pfsense2-2.grok and save
      

      8. Download the GEO_IP database

      cd /etc/logstash
      
      curl -O "http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz"
      
      gunzip GeoLiteCity.dat.gz
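
      For reference, this database gets wired into the logstash filter config through a geoip stanza along these lines (a sketch only - the field name src_ip and the bare filter wrapper are assumptions; the downloaded 81-pfsense-filter.conf is authoritative):

      ```
      filter {
        geoip {
          # "src_ip" is an assumed field name; use whatever your grok pattern captures
          source => "src_ip"
          database => "/etc/logstash/GeoLiteCity.dat"
        }
      }
      ```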
      

      9. Create the logstash conf files

      02-syslog-input.conf

      nano /etc/logstash/conf.d/02-syslog-input.conf
      
      Copy the contents of 02-syslog-input.conf and save
      
      Modify the port if needed
      

      20-syslog-filter.conf

      nano /etc/logstash/conf.d/20-syslog-filter.conf
      
      Copy the contents of 20-syslog-filter.conf and save
      
      Modify the section "#change to pfSense ip address" to reflect your pfsense IP address
      

      81-pfsense-filter.conf

      nano /etc/logstash/conf.d/81-pfsense-filter.conf
      
      Copy the contents of 81-pfsense-filter.conf and save
      

      99-elasticsearch-output.conf

      nano /etc/logstash/conf.d/99-elasticsearch-output.conf
      
      Copy the contents of 99-elasticsearch-output.conf and save
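
      To give an idea of what goes into these files, a syslog input like 02-syslog-input.conf is typically just a listener block along these lines (a sketch only - port 5140 is an assumption; the downloaded file is authoritative):

      ```
      input {
        udp {
          # must match the port pfSense sends remote syslog to
          port => 5140
          type => "syslog"
        }
      }
      ```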
      

      10. Download and install Kibana

      wget https://download.elastic.co/kibana/kibana/kibana-4.5.4-linux-x64.tar.gz
      
      tar -xzvf kibana-4.5.4-linux-x64.tar.gz
      
      mv kibana-4.5.4-linux-x64 /opt/kibana4/
      
      sed -i 's/#pid_file/pid_file/g' /opt/kibana4/config/kibana.yml
      

      11. Create "kibana4.sh" init script and save in /etc/init.d/

      cd /etc/init.d
      
      nano kibana4.sh
      
      Copy the contents of the kibana script and save
      

      12. Ensure services are running. Start if necessary.

      Start elasticsearch:

      service elasticsearch start
      

      Start logstash:

      service logstash start
      

      Start kibana:

       /opt/kibana4/bin/kibana &
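
      Once everything is started, a quick way to confirm the pieces are answering is to probe their default ports (9200 for Elasticsearch, 5601 for Kibana - these are the stack defaults and may differ in your setup). A small sketch using bash's /dev/tcp, so run it with bash:

      ```shell
      # Probe a local TCP port; succeeds if something accepts the connection.
      check_port() {
          (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
      }

      for p in 9200 5601; do
          if check_port "$p"; then
              echo "port $p: listening"
          else
              echo "port $p: NOT listening"
          fi
      done
      ```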
      

      13. Log into your pfsense system and point your logs to the ELK IP address:
      Status -> System Logs -> Settings (Remote Logging)

      14. Log into http://<ip_address>:5601
      15. Click "Create Index"

      16. On the kibana interface, go to Settings --> Objects and click Import. Import each file.
        • Dashboard - http://secretwafflelabs.com/files/Firewall_External_Dash.json
        • Visualizations Export - http://secretwafflelabs.com/files/Firewall_External_Visual.json
        • Saved Searches Export - http://secretwafflelabs.com/files/export.json

      17. On the kibana interface, go to Settings --> Objects and click the icon to view the new dashboard.

      Troubleshooting
      NOTE: For some reason my logstash doesn't start at boot. I have to look into this, but haven't had time yet, so I just start it manually.

      Here are some good troubleshooting commands:

      Ensure logstash and elasticsearch are running and did not error out:

      /opt/logstash/bin/logstash agent -f /etc/logstash/conf.d/ --debug
      

      View the logstash stdout in realtime to see if you are receiving syslog messages from pfsense:

       tail -f /var/log/logstash/logstash.stdout
      

      Check the logstash configuration files:

      /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/
      

      If you do not see "Create Index" in step 15, check whether logstash created an index:

      curl http://localhost:9200/_cat/indices
      
      [elk_files.zip](/public/_imported_attachments_/1/elk_files.zip)
        johnpoz LAYER 8 Global Moderator last edited by

        While this is great and I'm sure many people will be happy, why are you using old versions of stuff?

        The current version is 5, is it not? And why such an old version of Java? I just looked on my Ubuntu 14.04 VM and 111 is current:

        user@uc:~$ java -version
        java version "1.8.0_111"
        Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
        Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
        user@uc:~$

        An intelligent man is sometimes forced to be drunk to spend time with his fools
        If you get confused: Listen to the Music Play
        Please don't Chat/PM me for help, unless mod related
        2440 2.4.5p1 | 2x 3100 2.4.4p3 | 2x 3100 22.01 | 4860 22.01

          ando1 last edited by

          @johnpoz:

          While this is great and I'm sure many people will be happy, why are you using old versions of stuff?

          The current version is 5, is it not? And why such an old version of Java? I just looked on my Ubuntu 14.04 VM and 111 is current:

          user@uc:~$ java -version
          java version "1.8.0_111"
          Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
          Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
          user@uc:~$

          I used these versions because these were the ones that worked for me. I asked several times on this forum and received no help so I decided to share a working config with others here as I have read many posts where people said they had tried and could not get it going. If you got a newer version to work, then that's great. Post the instructions so everyone can also enjoy.

            AR15USR last edited by

            Thanks a bunch for this post ando1. Been looking forward to getting ELK going, will try it out when I get some free time…


            2.4.5-RELEASE-p1 (amd64)

              AR15USR last edited by

              I see no Create Index button. The output from your troubleshooting section is:

              yellow open .kibana 1 1 1 0 3.1kb 3.1kb
              

              Also, when importing the 3 .json files, the "Firewall External" imports fine but I get this error on the other two:

              Error: Could not locate that index-pattern (id: logstash-*)
              KbnError@http://0.0.0.0:5601/bundles/commons.bundle.js?v=10000:57463:21
              SavedObjectNotFound@http://0.0.0.0:5601/bundles/commons.bundle.js?v=10000:57592:6
              applyESResp@http://0.0.0.0:5601/bundles/kibana.bundle.js?v=10000:79296:37
              processQueue@http://0.0.0.0:5601/bundles/commons.bundle.js?v=10000:42404:29
              scheduleProcessQueue/<@http://0.0.0.0:5601/bundles/commons.bundle.js?v=10000:42420:28
              $RootScopeProvider/this.$get$RootScopeProvider/this.$get$RootScopeProvider/this.$getdone@http://0.0.0.0:5601/bundles/commons.bundle.js?v=10000:38205:37
              completeRequest@http://0.0.0.0:5601/bundles/commons.bundle.js?v=10000:38403:8
              requestLoaded@http://0.0.0.0:5601/bundles/commons.bundle.js?v=10000:38344:10
              
              

              Also, in steps 4 & 10, the file version numbers don't match fyi…



                AR15USR last edited by

                ando1, any idea what is going on?

                PS I ran every one of your troubleshooting commands and they all error out fyi…



                  ando1 last edited by

                  @AR15USR:

                  ando1, any idea what is going on?

                  PS I ran every one of your troubleshooting commands and they all error out fyi…

                  Can you post the output of the logstash debug? You may need to stop the service before you run the command:

                  /opt/logstash/bin/logstash agent -f /etc/logstash/conf.d/ --debug

                  Also what error do you get when you run this?

                  /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/

                  Andy

                    ando1 last edited by

                    For anyone interested in getting the newest version of ELK (v5) working with pfSense, I was able to do it using the instructions on this site: http://pfelk.3ilson.com/

                    You need at least Ubuntu server v16.04.1

                      AR15USR last edited by

                      @ando1:

                      Can you post the output of the logstash debug? You may need to stop the service before you run the command:

                      /opt/logstash/bin/logstash agent -f /etc/logstash/conf.d/ --debug

                      Also what error do you get when you run this?

                      /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/

                      Andy

                      /opt/logstash/bin/logstash agent -f /etc/logstash/conf.d/ --debug

                      Error: Expected one of #, input, filter, output at line 1, column 1 (byte 1) after  {:level=>:error, :file=>"logstash/agent.rb", :line=>"214", :method=>"execute"}
                      You may be interested in the '--configtest' flag which you can
                      use to validate logstash's configuration before you choose
                      to restart a running system. {:level=>:info, :file=>"logstash/agent.rb", :line=>"216", :method=>"execute"}
                      
                      

                      /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/

                      Error: Expected one of #, input, filter, output at line 1, column 1 (byte 1) after  {:level=>:error}
                      
                      


                        ando1 last edited by

                        /opt/logstash/bin/logstash agent -f /etc/logstash/conf.d/ --debug

                        Error: Expected one of #, input, filter, output at line 1, column 1 (byte 1) after  {:level=>:error, :file=>"logstash/agent.rb", :line=>"214", :method=>"execute"}
                        You may be interested in the '--configtest' flag which you can
                        use to validate logstash's configuration before you choose
                        to restart a running system. {:level=>:info, :file=>"logstash/agent.rb", :line=>"216", :method=>"execute"}
                        
                        

                        /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/

                        Error: Expected one of #, input, filter, output at line 1, column 1 (byte 1) after  {:level=>:error}
                        
                        

                        You definitely have a config file issue. Logstash combines all the configuration files into one and then processes them. Since the error is at Line 1 column 1 it sounds like the problem may be in the 02-inputs file. Have a look at all config files and double check they are OK.
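
                        One way to hunt this down (a sketch; the conf.d path assumes the layout from this guide) is to look at the first bytes of each pipeline file, since an invisible UTF-8 BOM at the top of the alphabetically first file produces exactly this "line 1, column 1" error:

                        ```shell
                        # Print a file's first three bytes in hex; a UTF-8 BOM shows up as "ef bb bf".
                        first_bytes() { head -c 3 "$1" | od -An -tx1; }

                        # Inspect every pipeline file logstash will concatenate, in order.
                        for f in /etc/logstash/conf.d/*.conf; do
                            [ -e "$f" ] || continue
                            printf '%s:%s\n' "$f" "$(first_bytes "$f")"
                        done
                        ```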

                          hamed_forum last edited by

                          Thanks! If you can create an OVA or OVF from the VM and upload it, that would be very good :)

                            doktornotor Banned last edited by

                            http://pfelk.3ilson.com/ basically works, but some pointers:

                            1/ There's a PPA for MaxMind:

                            sudo add-apt-repository ppa:maxmind/ppa
                            
                            • see http://dev.maxmind.com/geoip/geoipupdate/ for /etc/GeoIP.conf and run geoipupdate after that. The DB is located in /usr/share/GeoIP/GeoLite2-City.mmdb

                            2/ You really should set up some authentication:

                            https://www.elastic.co/guide/en/x-pack/current/installing-xpack.html#xpack-package-installation
                            https://www.elastic.co/guide/en/x-pack/current/setting-up-authentication.html
                            https://www.elastic.co/guide/en/x-pack/current/logstash.html

                              johnpoz LAYER 8 Global Moderator last edited by

                              Yeah I had issues with the date stuff in logstash config as well.. had to remove the +0400 and timezone..

                              I have it running, but elasticstack doesn't seem to want to stay running.  Haven't had time to look into why.  And have not had any time to do any visualizations - which is what everyone wants ;)


                                doktornotor Banned last edited by

                                @johnpoz:

                                I have it running, but elasticstack doesn't seem to want to stay running.  Haven't had time to look into why.

                                Make sure you've allocated at least 4GiB of RAM to this thing. (Java  >:( ::))

                                  hamed_forum last edited by

                                  Elasticsearch stops about 10 seconds after it starts.

                                    bubbawatson last edited by

                                    @doktornotor:

                                    @johnpoz:

                                    I have it running, but elasticstack doesn't seem to want to stay running.  Haven't had time to look into why.

                                    Make sure you've allocated at least 4GiB of RAM to this thing. (Java  >:( ::))

                                    I run elk stack on 1.5  ;D

                                    Small office though. Thx for the info on auth.. I've been wondering how to do that.

                                      BrunoCAVILLE last edited by

                                      I'm currently going through the process of installing ELK but I have an important question. If I redirect the logs from pfSense to the ELK server will I be able to access the raw logs somewhere? I need to have them somewhere and I'm wondering where they would be if they are sent to ELK.

                                        BrunoCAVILLE last edited by

                                        Everything works well except the maps visualization, can someone help?

                                        ![Capture d’écran 2017-05-05 à 15.18.39.png](/public/imported_attachments/1/Capture d’écran 2017-05-05 à 15.18.39.png)
                                        ![Capture d’écran 2017-05-05 à 15.18.39.png_thumb](/public/imported_attachments/1/Capture d’écran 2017-05-05 à 15.18.39.png_thumb)

                                          BrunoCAVILLE last edited by

                                          Up

                                          Logstash stops after a few seconds (raising the heap size didn't help).

                                            AMizil last edited by

                                            @BrunoCAVILLE:

                                            I'm currently going through the process of installing ELK but I have an important question. If I redirect the logs from pfSense to the ELK server will I be able to access the raw logs somewhere? I need to have them somewhere and I'm wondering where they would be if they are sent to ELK.

                                            Status menu -> System Logs -> Settings, then jump to Remote log servers: you can add up to two more syslog servers there, e.g. syslog-ng, Splunk, etc.

                                              ronv last edited by

                                              Hi all,

                                              trying to get this going with pfSense 2.3.4 and ELK 5.4. All components are talking OK, and I can get the JSON dashboard, search, and visualizations up and running - almost:

                                              • when I import the visualizations, Kibana complains that the tags geoip.country_name and geoip.city_name are not available.
                                              • I checked 11-pfsense.conf (which I used from this site) against the spec at https://www.elastic.co/guide/en/logstash/current/plugins-filters-geoip.html, and there does not appear to be any issue with this - that is, it looks like those tags should be returned.

                                              Anything else I could check, or logs I could provide?

                                              kind regards

                                              Ron

                                                hamed_forum last edited by

                                                Where are the logs sent from pfSense saved on the ELK server?
                                                I am changing ELK servers; how do I export and import the logs from the previous server?

                                                  pfBasic Banned last edited by

                                                  Any differences to get this running on 2.4.0 BETA?

                                                    pfBasic Banned last edited by

                                                    I finally got this up & running on pfSense 2.4.0 BETA with the help of AR15USR and some people on IRC.

                                                    Initially I was having trouble getting the Index Patterns to populate in the first step of Kibana. I had followed doktornotor's advice for setting up MaxMind. For whatever reason that didn't work for me so I just did it according to http://pfelk.3ilson.com/ and it worked.

                                                    Next, I had everything stable and logs being imported, but all logs were being tagged "_grokparsefailure" & "_geoip_lookup_failure" and since the pattern wasn't matching, it wasn't putting out any useful fields/information. This was also preventing me from importing the Visualizations.json due to not having the applicable fields available.

                                                    After way too much time troubleshooting and trying to figure out what was happening and why, a kind IRC user gave me some direction and pointed me to the grok debugger: https://grokdebug.herokuapp.com/
                                                    For anyone looking to troubleshoot or modify their own grok pattern files, here's what I could make of the fields in 2.4.0 BETA's Rsyslog format. https://forum.pfsense.org/index.php?topic=133354.msg733494#msg733494
                                                    Run a pcap to see exactly what your pfSense box is sending to your ELK server.

                                                    It turned out that all I needed to do was change one character in /etc/logstash/conf.d/patterns/pfsense2-3.grok and reboot.

                                                    I changed line 16 (PFSENSE_LOG_DATA)
                                                    From:

                                                    PFSENSE_LOG_DATA (%{INT:rule}),(%{INT:sub_rule}),,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
                                                    

                                                    To:

                                                    PFSENSE_LOG_DATA (%{INT:rule}),(%{INT:sub_rule})?,,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
                                                    

                                                    That's it, one "?".
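
                                                    The effect of that ? can be illustrated outside grok with plain extended regex (the sample lines below are made-up filterlog-style records, not captured logs): the original pattern required digits in the sub_rule position, so records with an empty sub_rule field never matched.

                                                    ```shell
                                                    # Two made-up filterlog-style records: sub_rule present in the
                                                    # first, empty in the second.
                                                    samples=$(printf '%s\n' '5,12,,1000000103,igb0,' '5,,,1000000103,igb0,')

                                                    # Original pattern: digits in the sub_rule position are mandatory.
                                                    echo "$samples" | grep -cE '^([0-9]+),([0-9]+),,([0-9]+),'     # prints 1

                                                    # Patched pattern: the trailing ? makes the sub_rule group optional.
                                                    echo "$samples" | grep -cE '^([0-9]+),([0-9]+)?,,([0-9]+),'    # prints 2
                                                    ```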

                                                    After that, log files were parsing successfully, I refreshed my Index Pattern Field List to pull in all of the new fields, imported the Visualizations.json and opened up the Dashboard. All is working now on my single core atom with 2GB DDR2!

                                                    @doktornotor:

                                                    @johnpoz:

                                                    I have it running, but elasticstack doesn't seem to want to stay running.  Haven't had time to look into why.

                                                    Make sure you've allocated at least 4GiB of RAM to this thing. (Java  >:( ::))

                                                    I have this up and running (for home use) on an old netbook with an atom N450 (Pineview ~2010, single core 1.66GHz) with 2GB DDR2. I had to significantly lower RAM usage in the following two files to get it working. Currently using <1.5GB RAM, the OS is lubuntu with GUI service disabled. It's also running a Unifi controller. Dashboard is slow to load even for a small home network but it works! I couldn't justify buying anything to get an ELK stack for my home network.

                                                    /etc/elasticsearch/jvm.options
                                                    
                                                    /etc/logstash/jvm.options
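
                                                    For reference, the heap in those files is set by the -Xms/-Xmx lines; the kind of edit I mean looks like this (the 256m figure is purely illustrative, not a recommendation):

                                                    ```
                                                    # /etc/elasticsearch/jvm.options (and likewise /etc/logstash/jvm.options)
                                                    -Xms256m
                                                    -Xmx256m
                                                    ```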
                                                    


                                                      idealanthony last edited by

                                                      @BrunoCAVILLE:

                                                      Everything works well except the maps visualization, can someone help?

                                                      @BrunoCAVILLE - I'm having the same problem as you did. I used the revised visualization file due to the .keyword issue. I've attempted to merge back in the country sections from the http://pfelk.3ilson.com/ visualization file, but still no luck. Just wanted to know if you were able to identify/resolve the issue?

                                                      https://forum.pfsense.org/index.php?topic=125376.0

                                                        pfBasic Banned last edited by

                                                        Did you refresh your fields list (Management / Index Patterns) after a number of your log files were successfully parsed?

                                                        If not, do that first, then try to import the pfelk visualization.json.

                                                        The import fails if you don't have the appropriate fields available.


                                                          8ayM last edited by

                                                          I just wanted to add that the Kibana4 init script from the OP is no longer available via a link as the others were, so I'm copying it here in text form. It took me a moment to realize that the scripts are all included in the zip file attached to the OP as well.

                                                          Kibana4 init script:

                                                          #!/bin/sh

                                                          /etc/init.d/kibana4 – startup script for kibana4

                                                          bsmith@the408.com 2015-02-20; used elasticsearch init script as template

                                                          https://github.com/akabdog/scripts/edit/master/kibana4_init

                                                          BEGIN INIT INFO

                                                          Provides:          kibana4

                                                          Required-Start:    $network $remote_fs $named

                                                          Required-Stop:    $network $remote_fs $named

                                                          Default-Start:    2 3 4 5

                                                          Default-Stop:      0 1 6

                                                          Short-Description: Starts kibana4

                                                          Description:      Starts kibana4 using start-stop-daemon

                                                          END INIT INFO

                                                          #configure this with wherever you unpacked kibana:
                                                          KIBANA_BIN=/opt/kibana4/bin

                                                          PID_FILE=/var/run/$NAME.pid
                                                          PATH=/bin:/usr/bin:/sbin:/usr/sbin:$KIBANA_BIN
                                                          DAEMON=$KIBANA_BIN/kibana
                                                          NAME=kibana4
                                                          DESC="Kibana4"

                                                          if [ id -u -ne 0 ]; then
                                                                  echo "You need root privileges to run this script"
                                                                  exit 1
                                                          fi

                                                          . /lib/lsb/init-functions

                                                          if [ -r /etc/default/rcS ]; then
                                                                  . /etc/default/rcS
                                                          fi

                                                          case "$1" in
                                                            start)
                                                                  log_daemon_msg "Starting $DESC"

                                                          pid=pidofproc -p $PID_FILE kibana
                                                                  if [ -n "$pid" ] ; then
                                                                          log_begin_msg "Already running."
                                                                          log_end_msg 0
                                                                          exit 0
                                                                  fi

                                                          # Start Daemon
                                                                  start-stop-daemon --start --pidfile "$PID_FILE" --make-pidfile --background --exec $DAEMON
                                                                  log_end_msg $?
                                                                  ;;
                                                            stop)
                                                                  log_daemon_msg "Stopping $DESC"

                                                          if [ -f "$PID_FILE" ]; then
                                                                          start-stop-daemon --stop --pidfile "$PID_FILE" \
                                                                                  --retry=TERM/20/KILL/5 >/dev/null
                                                                          RET=$?
                                                                          if [ $RET -eq 1 ]; then
                                                                                  log_progress_msg "$DESC is not running but pid file exists, cleaning up"
                                                                          elif [ $RET -eq 3 ]; then
                                                                                  PID=$(cat "$PID_FILE")
                                                                                  log_failure_msg "Failed to stop $DESC (pid $PID)"
                                                                                  exit 1
                                                                          fi
                                                                          rm -f "$PID_FILE"
                                                                  else
                                                                          log_progress_msg "(not running)"
                                                                  fi
                                                                  log_end_msg 0
                                                                  ;;
                                                            status)
                                                                  status_of_proc -p $PID_FILE kibana kibana && exit 0 || exit $?
                                                              ;;
                                                            restart|force-reload)
                                                                  if [ -f "$PID_FILE" ]; then
                                                                          $0 stop
                                                                          sleep 1
                                                                  fi
                                                                  $0 start
                                                                  ;;
                                                            *)
                                                                  log_success_msg "Usage: $0 {start|stop|restart|force-reload|status}"
                                                                  exit 1
                                                                  ;;
                                                          esac
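A note on copying this init script from a web page: backticks and double dashes are easily mangled by forum software, so check that lines like `pid=` and `PID=` use real command substitution and that every `start-stop-daemon` option starts with two ASCII hyphens. A minimal sketch of what command substitution does, using `echo` as a stand-in command:

```shell
#!/bin/sh
# $(...) captures a command's stdout into the variable.
# Written without it, e.g. 'pid=pidofproc -p FILE kibana', the shell would
# instead try to run '-p' as a command with pid=pidofproc in its environment.
pid=$(echo 12345)   # stand-in for: pid=$(pidofproc -p "$PID_FILE" kibana)
echo "captured pid: $pid"
```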

                                                            8ayM last edited by

                                                            @AR15USR:

                                                            ando1, any idea what is going on?

                                                            PS: I ran every one of your troubleshooting commands and they all error out, FYI…

                                                            How did you make out with this? I'm running into the same issue and it doesn't look like there was a resolution to the issue.

                                                              AR15USR last edited by

                                                              I'm starting to get this error when my Dashboard refreshes: "Courier Fetch: 28 of 325 shards failed."

                                                              I've also noticed that the newest indices show yellow health and a replica count of 1:

                                                              health status index               uuid                   pri rep docs.count docs.deleted store.size pri.store.size
                                                              green  open   logstash-2017.09.09 yhdtjrKHQVycMOCfBmssWQ   5   0     347962            0    150.8mb        150.8mb
                                                              green  open   logstash-2017.09.10 Yg98wyN5SYav2dnc8OmFxA   5   0     359406            0      158mb          158mb
                                                              green  open   logstash-2017.09.11 mG66BkrDQOSnyJCqI5Ir-w   5   0     380644            0    164.2mb        164.2mb
                                                              green  open   logstash-2017.09.12 y26fNsoORtW6cSx1QcE7ZQ   5   0     390537            0    169.2mb        169.2mb
                                                              green  open   logstash-2017.09.13 MxyncENMRXqxLnMuJIw1rw   5   0     353464            0    152.2mb        152.2mb
                                                              yellow open   logstash-2017.09.14 Gp3dZ-uUTeWv9YIS4calhw   5   1     376975            0    163.5mb        163.5mb
                                                              yellow open   logstash-2017.09.15 cq8n4mYYSWGZZrzrb50B-g   5   1     392566            0    165.2mb        165.2mb
                                                              yellow open   logstash-2017.09.16 u7aF2fGSSmOJmJCU4odO5w   5   1     210728            0     94.5mb         94.5mb
                                                              
                                                              

                                                              Anyone know why this is happening all of a sudden?


                                                              2.4.5-RELEASE-p1 (amd64)

                                                                AR15USR last edited by

                                                                Anyone able to give me any clues or places to start with the above?



                                                                  tzidore last edited by

                                                                  Hi

                                                                  I have been trying to fetch logs from pfSense 2.4.1 all day. I haven't been able to get the grok pattern to work.
                                                                  The post below does nothing for me. It worked fine on 2.3.4_p1.

                                                                  Any ideas?

                                                                  @pfBasic:

                                                                  I finally got this up & running on pfSense 2.4.0 BETA with the help of AR15USR and some people on IRC.

                                                                  Initially I was having trouble getting the Index Patterns to populate in the first step of Kibana. I had followed doktornotor's advice for setting up MaxMind. For whatever reason that didn't work for me so I just did it according to http://pfelk.3ilson.com/ and it worked.

                                                                  Next, I had everything stable and logs being imported, but all logs were being tagged "_grokparsefailure" & "_geoip_lookup_failure" and since the pattern wasn't matching, it wasn't putting out any useful fields/information. This was also preventing me from importing the Visualizations.json due to not having the applicable fields available.

                                                                  After way too much time troubleshooting and trying to figure out what was happening and why I was given some direction and pointed to the grok debugger by a kind IRC user. https://grokdebug.herokuapp.com/
                                                                  For anyone looking to troubleshoot or modify their own grok pattern files, here's what I could make of the fields in 2.4.0 BETA's Rsyslog format. https://forum.pfsense.org/index.php?topic=133354.msg733494#msg733494
                                                                  Run a pcap to see exactly what your pfSense box is sending to your ELK server.

                                                                  It turned out that all I needed to do was change one character in /etc/logstash/conf.d/patterns/pfsense2-3.grok and reboot.

                                                                  I changed line 16 (PFSENSE_LOG_DATA)
                                                                  From:

                                                                  PFSENSE_LOG_DATA (%{INT:rule}),(%{INT:sub_rule}),,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
                                                                  

                                                                  To:

                                                                  PFSENSE_LOG_DATA (%{INT:rule}),(%{INT:sub_rule})?,,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
                                                                  

                                                                  That's it, one "?".

                                                                  After that, log files were parsing successfully, I refreshed my Index Pattern Field List to pull in all of the new fields, imported the Visualizations.json and opened up the Dashboard. All is working now on my single core atom with 2GB DDR2!
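To see why that one "?" matters, the old and new pattern prefixes can be compared with plain ERE via grep. The sample line below is illustrative (an assumed 2.4-style filterlog entry with an empty sub_rule field), not captured from a real firewall:

```shell
#!/bin/sh
# Hypothetical filterlog prefix: rule,sub_rule,anchor,tracker,iface,...
# Here the sub_rule field between the first two commas is empty.
line='65,,,1000000103,igb1,match,block,in,4'

# Old fragment: sub_rule must be digits, so the empty field fails to match.
old=$(echo "$line" | grep -Eq '^[0-9]+,[0-9]+,,' && echo match || echo nomatch)

# New fragment: the added "?" makes sub_rule optional, so it matches.
new=$(echo "$line" | grep -Eq '^[0-9]+,([0-9]+)?,,' && echo match || echo nomatch)

echo "old pattern: $old, new pattern: $new"
```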

                                                                  @doktornotor:

                                                                  @johnpoz:

                                                                  I have it running, but elasticstack doesn't seem to want to stay running.  Haven't had time to look into why.

                                                                  Make sure you've allocated at least 4GiB of RAM to this thing. (Java  >:( ::))

                                                                  I have this up and running (for home use) on an old netbook with an atom N450 (Pineview ~2010, single core 1.66GHz) with 2GB DDR2. I had to significantly lower RAM usage in the following two files to get it working. Currently using <1.5GB RAM, the OS is lubuntu with GUI service disabled. It's also running a Unifi controller. Dashboard is slow to load even for a small home network but it works! I couldn't justify buying anything to get an ELK stack for my home network.

                                                                  /etc/elasticsearch/jvm.options
                                                                  
                                                                  /etc/logstash/jvm.options
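The poster doesn't give the exact values used; for reference, heap size in both files is controlled by the JVM `-Xms`/`-Xmx` flags. The numbers below are only an illustration of the kind of low-memory settings involved on a ~2GB host, not the poster's actual configuration:

```
# /etc/elasticsearch/jvm.options (illustrative low-memory values)
-Xms512m
-Xmx512m

# /etc/logstash/jvm.options (illustrative)
-Xms256m
-Xmx256m
```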
                                                                  
                                                                    f11 last edited by

                                                                    So after messing about with the Grok debugger, modifying the .grok file to the following seemed to work:

                                                                    PFSENSE_LOG_DATA (%{INT:rule}),,,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
                                                                    PFSENSE_IP_SPECIFIC_DATA (%{PFSENSE_IPv4_SPECIFIC_DATA}|%{PFSENSE_IPv6_SPECIFIC_DATA})
                                                                    PFSENSE_IPv4_SPECIFIC_DATA (%{BASE16NUM:tos}),,(%{INT:ttl}),(%{INT:id}),(%{INT:offset}),(%{WORD:flags}),(%{INT:proto_id}),(%{WORD:proto}),
                                                                    PFSENSE_IPv4_SPECIFIC_DATA_ECN (%{BASE16NUM:tos}),(%{INT:ecn}),(%{INT:ttl}),(%{INT:id}),(%{INT:offset}),(%{WORD:flags}),(%{INT:proto_id}),(%{WORD:proto}),
                                                                    PFSENSE_IPv6_SPECIFIC_DATA (%{BASE16NUM:class}),(%{DATA:flow_label}),(%{INT:hop_limit}),(%{WORD:proto}),(%{INT:proto_id}),
                                                                    PFSENSE_IP_DATA (%{INT:length}),(%{IP:src_ip}),(%{IP:dest_ip}),
                                                                    PFSENSE_PROTOCOL_DATA (%{PFSENSE_TCP_DATA}|%{PFSENSE_UDP_DATA}|%{PFSENSE_ICMP_DATA}|%{PFSENSE_CARP_DATA})
                                                                    PFSENSE_TCP_DATA (%{INT:src_port}),(%{INT:dest_port}),(%{INT:data_length}),(%{WORD:tcp_flags}),(%{INT:sequence_number}),(%{INT:ack_number}),(%{INT:tcp_window}),(%{DATA:urg_data}),(%{DATA:tcp_options})
                                                                    PFSENSE_UDP_DATA (%{INT:src_port}),(%{INT:dest_port}),(%{INT:data_length})
                                                                    PFSENSE_ICMP_DATA (%{PFSENSE_ICMP_TYPE}%{PFSENSE_ICMP_RESPONSE})
                                                                    PFSENSE_ICMP_TYPE (?<icmp_type>(request|reply|unreachproto|unreachport|unreach|timeexceed|paramprob|redirect|maskreply|needfrag|tstamp|tstampreply)),
                                                                    PFSENSE_ICMP_RESPONSE (%{PFSENSE_ICMP_ECHO_REQ_REPLY}|%{PFSENSE_ICMP_UNREACHPORT}|%{PFSENSE_ICMP_UNREACHPROTO}|%{PFSENSE_ICMP_UNREACHABLE}|%{PFSENSE_ICMP_NEED_FLAG}|%{PFSENSE_ICMP_TSTAMP}|%{PFSENSE_ICMP_TSTAMP_REPLY})
                                                                    PFSENSE_ICMP_ECHO_REQ_REPLY (%{INT:icmp_echo_id}),(%{INT:icmp_echo_sequence})
                                                                    PFSENSE_ICMP_UNREACHPORT (%{IP:icmp_unreachport_dest_ip}),(%{WORD:icmp_unreachport_protocol}),(%{INT:icmp_unreachport_port})
                                                                    PFSENSE_ICMP_UNREACHPROTO (%{IP:icmp_unreach_dest_ip}),(%{WORD:icmp_unreachproto_protocol})
                                                                    PFSENSE_ICMP_UNREACHABLE (%{GREEDYDATA:icmp_unreachable})
                                                                    PFSENSE_ICMP_NEED_FLAG (%{IP:icmp_need_flag_ip}),(%{INT:icmp_need_flag_mtu})
                                                                    PFSENSE_ICMP_TSTAMP (%{INT:icmp_tstamp_id}),(%{INT:icmp_tstamp_sequence})
                                                                    PFSENSE_ICMP_TSTAMP_REPLY (%{INT:icmp_tstamp_reply_id}),(%{INT:icmp_tstamp_reply_sequence}),(%{INT:icmp_tstamp_reply_otime}),(%{INT:icmp_tstamp_reply_rtime}),(%{INT:icmp_tstamp_reply_ttime})
                                                                    
                                                                    PFSENSE_CARP_DATA (%{WORD:carp_type}),(%{INT:carp_ttl}),(%{INT:carp_vhid}),(%{INT:carp_version}),(%{INT:carp_advbase}),(%{INT:carp_advskew})
                                                                    
                                                                    DHCPD (%{DHCPDISCOVER}|%{DHCPOFFER}|%{DHCPREQUEST}|%{DHCPACK}|%{DHCPINFORM}|%{DHCPRELEASE})
                                                                    DHCPDISCOVER %{WORD:dhcp_action} from %{COMMONMAC:dhcp_client_mac}%{SPACE}(\(%{GREEDYDATA:dhcp_client_hostname}\))? via (?<dhcp_client_vlan>[0-9a-z_]*)(: %{GREEDYDATA:dhcp_load_balance})?
                                                                    DHCPOFFER %{WORD:dhcp_action} on %{IPV4:dhcp_client_ip} to %{COMMONMAC:dhcp_client_mac}%{SPACE}(\(%{GREEDYDATA:dhcp_client_hostname}\))? via (?<dhcp_client_vlan>[0-9a-z_]*)
                                                                    DHCPREQUEST %{WORD:dhcp_action} for %{IPV4:dhcp_client_ip}%{SPACE}(\(%{IPV4:dhcp_ip_unknown}\))? from %{COMMONMAC:dhcp_client_mac}%{SPACE}(\(%{GREEDYDATA:dhcp_client_hostname}\))? via (?<dhcp_client_vlan>[0-9a-z_]*)(: %{GREEDYDATA:dhcp_request_message})?
                                                                    DHCPACK %{WORD:dhcp_action} on %{IPV4:dhcp_client_ip} to %{COMMONMAC:dhcp_client_mac}%{SPACE}(\(%{GREEDYDATA:dhcp_client_hostname}\))? via (?<dhcp_client_vlan>[0-9a-z_]*)
                                                                    DHCPINFORM %{WORD:dhcp_action} from %{IPV4:dhcp_client_ip} via (?<dhcp_client_vlan>[0-9a-z_]*)
                                                                    DHCPRELEASE %{WORD:dhcp_action} of %{IPV4:dhcp_client_ip} from %{COMMONMAC:dhcp_client_mac}%{SPACE}(\(%{GREEDYDATA:dhcp_client_hostname}\))? via (?<dhcp_client_vlan>[0-9a-z_]*)
                                                                    

                                                                    Essentially the line

                                                                    PFSENSE_LOG_DATA (%{INT:rule}),(%{INT:sub_rule}),,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
                                                                    
                                                                    

                                                                    Was changed to

                                                                    PFSENSE_LOG_DATA (%{INT:rule}),,,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
                                                                    

                                                                    Now logs seem to be parsing correctly and my dashboard is looking populated again.

                                                                      alex_ncus last edited by

                                                                      NOOB

                                                                      Has anyone been able to get the latest ELK running in Docker containers and parsing pfSense 2.4.1 logs? I'm trying to get pfSense to send its logs to a NAS, then have my Mac workstation run ELK in Docker to parse/analyze the data staged on the NAS.

                                                                      Does this even make sense?

                                                                        donnydavis last edited by

                                                                        I used EFK and it works great for me

                                                                          vinchi007 last edited by

                                                                          Thanks for sharing! I've set up my ELK stack and pfSense 2.2 in a FreeNAS jail. One thing to note: on FreeNAS, the logstash startup script only loads the logstash.conf file, so I combined all of these conf files into one. Also note that you need to change the "patterns" path to wherever your grok patterns file resides.

                                                                            AR15USR last edited by

                                                                            Is anyone having issues with grok failures on 2.4.2? I'm seeing more than 50% failures. It seems to fail on the inbound block/pass logs; the outbound block/pass logs parse fine.

                                                                            I've set it up exactly according to http://pfelk.3ilson.com/ and also set up MaxMind per the instructions here: https://forum.pfsense.org/index.php?topic=120937.msg671603#msg671603



                                                                              thhi last edited by

                                                                              Maybe this is useful for someone:

                                                                              We use VLANs in our environment, and our interface names contain "." (not just "word" characters), so we modified "PFSENSE_LOG_DATA" in the grok filter:

                                                                              
                                                                              IFACE \b[a-zA-Z0-9.]+\b
                                                                              #PFSENSE_LOG_DATA (%{INT:rule}),(%{INT:sub_rule}),,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
                                                                              PFSENSE_LOG_DATA (%{INT:rule}),(%{INT:sub_rule})?,,(%{INT:tracker}),(%{IFACE:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
                                                                              
                                                                              
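The underlying issue can be sketched with plain ERE: `%{WORD}` is roughly `[A-Za-z0-9_]+`, which stops at the dot in a VLAN interface name, while the custom `IFACE` class accepts it. A quick check (the interface name is an illustrative example):

```shell
#!/bin/sh
# VLAN interface names such as "igb0.100" contain a dot, which the
# WORD-style class stops at; the custom IFACE class allows it.
iface='igb0.100'

word=$(echo "$iface" | grep -Eq '^[A-Za-z0-9_]+$' && echo match || echo nomatch)
custom=$(echo "$iface" | grep -Eq '^[a-zA-Z0-9.]+$' && echo match || echo nomatch)

echo "WORD: $word, IFACE: $custom"
```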
                                                                                gurulee last edited by

                                                                                I followed this guide initially: https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elastic-stack-on-ubuntu-16-04 . At that point I was only seeing Filebeat data in Kibana.

                                                                                But after merging in these steps, I now don't see any log data from pfSense or Filebeat. Do I need to disable/remove Filebeat?

                                                                                  gurulee @gurulee last edited by


                                                                                  I believe Filebeat was screwing things up. I blew away the VM after hours of racking my brain and reinstalled the ELK stack. I ended up rebuilding it according to this guide: http://pfelk.3ilson.com/2017/10/pfsense-v24xkibanaelasticsearchlogstash.html?m=1 and it's working now. I just need to tweak the dashboards and visualizations at this point.

                                                                                    bartkowski last edited by

                                                                                    Have any of you seen the https://logz.io offering?
