Netgate Discussion Forum
    ELK + pfSense 2.3 Working

    General pfSense Questions
    41 Posts 21 Posters 39.1k Views
    • A
      ando1
      last edited by

      For anyone interested in getting the newest version of ELK (v5) working with pfSense, I was able to get it working using the instructions on this site: http://pfelk.3ilson.com/

      You need at least Ubuntu Server v16.04.1.

      • A
        AR15USR
        last edited by

        @ando1:

        Can you post the output of the logstash debug? You may need to stop the service before you run the command:

        /opt/logstash/bin/logstash agent -f /etc/logstash/conf.d/ --debug

        Also what error do you get when you run this?

        /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/

        Andy

        /opt/logstash/bin/logstash agent -f /etc/logstash/conf.d/ --debug

        Error: Expected one of #, input, filter, output at line 1, column 1 (byte 1) after  {:level=>:error, :file=>"logstash/agent.rb", :line=>"214", :method=>"execute"}
        You may be interested in the '--configtest' flag which you can
        use to validate logstash's configuration before you choose
        to restart a running system. {:level=>:info, :file=>"logstash/agent.rb", :line=>"216", :method=>"execute"}
        
        

        /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/

        Error: Expected one of #, input, filter, output at line 1, column 1 (byte 1) after  {:level=>:error}
        
        

        2.6.0-RELEASE

        • A
          ando1
          last edited by

          /opt/logstash/bin/logstash agent -f /etc/logstash/conf.d/ --debug

          Error: Expected one of #, input, filter, output at line 1, column 1 (byte 1) after  {:level=>:error, :file=>"logstash/agent.rb", :line=>"214", :method=>"execute"}
          You may be interested in the '--configtest' flag which you can
          use to validate logstash's configuration before you choose
          to restart a running system. {:level=>:info, :file=>"logstash/agent.rb", :line=>"216", :method=>"execute"}
          
          

          /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/

          Error: Expected one of #, input, filter, output at line 1, column 1 (byte 1) after  {:level=>:error}
          
          

          You definitely have a config file issue. Logstash combines all the configuration files into one and then processes them. Since the error is at Line 1 column 1 it sounds like the problem may be in the 02-inputs file. Have a look at all config files and double check they are OK.
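          A file that fails that check usually has something before the first section keyword. For reference, the smallest shape Logstash will accept is a sketch like this (the port and host here are placeholder values, not the pfelk ones):

```
input {
  udp {
    port => 5140          # placeholder: whatever port pfSense sends syslog to
    type => "syslog"
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }   # placeholder host
}
```

          Anything else at line 1, column 1 (a BOM, or stray text from a copy/paste) produces exactly the "Expected one of #, input, filter, output" error above.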

          • H
            hamed_forum
            last edited by

            Thanks!
            If you could create an OVA or OVF from the VM and upload it, that would be great. :)

            • D
              doktornotor Banned
              last edited by

              http://pfelk.3ilson.com/ basically works, but some pointers:

              1/ There's a PPA for MaxMind:

              sudo add-apt-repository ppa:maxmind/ppa
              
              • see http://dev.maxmind.com/geoip/geoipupdate/ for /etc/GeoIP.conf and run geoipupdate after that. The DB is located in /usr/share/GeoIP/GeoLite2-City.mmdb
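              For reference, the free-tier /etc/GeoIP.conf of that era looked roughly like this; treat the values as placeholders and verify against the MaxMind page above:

```
# /etc/GeoIP.conf (sketch for the free GeoLite2 databases)
UserId 999999
LicenseKey 000000000000
ProductIds GeoLite2-City GeoLite2-Country
```

              After editing, run geoipupdate and point the Logstash geoip filter at /usr/share/GeoIP/GeoLite2-City.mmdb.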

              2/ You really should set up some authentication:

              https://www.elastic.co/guide/en/x-pack/current/installing-xpack.html#xpack-package-installation
              https://www.elastic.co/guide/en/x-pack/current/setting-up-authentication.html
              https://www.elastic.co/guide/en/x-pack/current/logstash.html

              • johnpozJ
                johnpoz LAYER 8 Global Moderator
                last edited by

                Yeah I had issues with the date stuff in logstash config as well.. had to remove the +0400 and timezone..

                I have it running, but elasticstack doesn't seem to want to stay running.  Haven't had time to look into why.  And have not had any time to do any visualizations - which is what everyone wants ;)

                An intelligent man is sometimes forced to be drunk to spend time with his fools
                If you get confused: Listen to the Music Play
                Please don't Chat/PM me for help, unless mod related
                SG-4860 24.11 | Lab VMs 2.8, 24.11

                • D
                  doktornotor Banned
                  last edited by

                  @johnpoz:

                  I have it running, but elasticstack doesn't seem to want to stay running.  Haven't had time to look into why.

                  Make sure you've allocated at least 4GiB of RAM to this thing. (Java  >:( ::))
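                   Heap size lives in /etc/elasticsearch/jvm.options (Logstash has its own copy under /etc/logstash/). A sketch matching the 4 GiB suggestion; the flags are standard JVM options, the values are examples:

```
# /etc/elasticsearch/jvm.options (excerpt)
# Keep min and max heap equal so the JVM does not resize the heap at runtime.
-Xms4g
-Xmx4g
```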

                  • H
                    hamed_forum
                    last edited by

                     Elasticsearch stops about 10 seconds after it starts.

                    • B
                      bubbawatson
                      last edited by

                      @doktornotor:

                      @johnpoz:

                      I have it running, but elasticstack doesn't seem to want to stay running.  Haven't had time to look into why.

                      Make sure you've allocated at least 4GiB of RAM to this thing. (Java  >:( ::))

                       I run the ELK stack on 1.5 GB  ;D

                      Small office though. Thx for the info on auth.. I've been wondering how to do that.

                      • B
                        BrunoCAVILLE
                        last edited by

                        I'm currently going through the process of installing ELK but I have an important question. If I redirect the logs from pfSense to the ELK server will I be able to access the raw logs somewhere? I need to have them somewhere and I'm wondering where they would be if they are sent to ELK.

                        • B
                          BrunoCAVILLE
                          last edited by

                           Everything works well except the maps visualization; can someone help?

                          ![Capture d’écran 2017-05-05 à 15.18.39.png](/public/imported_attachments/1/Capture d’écran 2017-05-05 à 15.18.39.png)

                          • B
                            BrunoCAVILLE
                            last edited by

                            Up

                            Logstash stops after a few seconds (rising heap size didn't help).

                            • A
                              AMizil
                              last edited by

                              @BrunoCAVILLE:

                              I'm currently going through the process of installing ELK but I have an important question. If I redirect the logs from pfSense to the ELK server will I be able to access the raw logs somewhere? I need to have them somewhere and I'm wondering where they would be if they are sent to ELK.

                               Go to Status > System Logs > Settings and jump to Remote log servers: you can add another two syslog servers there, e.g. syslog-ng, Splunk, etc.

                              • R
                                ronv
                                last edited by

                                Hi all,

                                 trying to get this going with pfSense 2.3.4 and ELK 5.4 - all components are talking OK, and I can get the JSON dashboard, search and visualization up and running - almost…:

                                • when I import the visualizations, Kibana complains that the tags geoip.country_name and geoip.city_name are not available.
                                • I checked 11-pfsense.conf (which I used from this site) against the spec at https://www.elastic.co/guide/en/logstash/current/plugins-filters-geoip.html, and there does not appear to be any issue with this - that is, it looks like those tags should be returned.
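                                 For anyone cross-checking their own 11-pfsense.conf, a geoip stanza that should emit those fields looks roughly like this (the source field name and database path are assumptions, not the exact pfelk file):

```
filter {
  geoip {
    source => "src_ip"                                # assumed field holding the client IP
    database => "/usr/share/GeoIP/GeoLite2-City.mmdb" # GeoLite2 City database path
  }
}
```

                                 On a successful lookup the filter adds geoip.country_name, geoip.city_name, and related fields; if the tag _geoip_lookup_failure shows up instead, the source field or the database path is wrong.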

                                Anything else I could check, or logs I could provide?

                                kind regards

                                Ron

                                • H
                                  hamed_forum
                                  last edited by

                                   Where are the logs sent from pfSense saved on the ELK server?
                                   I am changing the ELK server; how do I export/import the logs from the previous server?

                                  • P
                                    pfBasic Banned
                                    last edited by

                                    Any differences to get this running on 2.4.0 BETA?

                                    • P
                                      pfBasic Banned
                                      last edited by

                                      I finally got this up & running on pfSense 2.4.0 BETA with the help of AR15USR and some people on IRC.

                                      Initially I was having trouble getting the Index Patterns to populate in the first step of Kibana. I had followed doktornotor's advice for setting up MaxMind. For whatever reason that didn't work for me so I just did it according to http://pfelk.3ilson.com/ and it worked.

                                      Next, I had everything stable and logs being imported, but all logs were being tagged "_grokparsefailure" & "_geoip_lookup_failure" and since the pattern wasn't matching, it wasn't putting out any useful fields/information. This was also preventing me from importing the Visualizations.json due to not having the applicable fields available.

                                       After way too much time troubleshooting and trying to figure out what was happening and why, a kind IRC user gave me some direction and pointed me to the grok debugger: https://grokdebug.herokuapp.com/
                                       For anyone looking to troubleshoot or modify their own grok pattern files, here's what I could make of the fields in 2.4.0 BETA's Rsyslog format: https://forum.pfsense.org/index.php?topic=133354.msg733494#msg733494
                                      Run a pcap to see exactly what your pfSense box is sending to your ELK server.

                                      It turned out that all I needed to do was change one character in /etc/logstash/conf.d/patterns/pfsense2-3.grok and reboot.

                                      I changed line 16 (PFSENSE_LOG_DATA)
                                      From:

                                      PFSENSE_LOG_DATA (%{INT:rule}),(%{INT:sub_rule}),,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
                                      

                                      To:

                                      PFSENSE_LOG_DATA (%{INT:rule}),(%{INT:sub_rule})?,,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
                                      

                                      That's it, one "?".
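                                       The effect of that one character can be reproduced outside Logstash. A sketch in plain ERE via grep (the log line below is made up, and the patterns are only the prefix of the grok rule, with INT expanded by hand):

```shell
#!/bin/sh
# Hypothetical pfSense filterlog line where sub_rule is empty: note the
# three consecutive commas after the rule number ("5").
line='5,,,1000000103,igb0,match,block,in,4,'

# ERE prefix of the grok rule WITHOUT the "?": an empty sub_rule cannot match.
echo "$line" | grep -Eq '^([0-9]+),([0-9]+),,([0-9]+),' || echo "no match without ?"

# WITH "(...)?" the empty sub_rule is allowed and the line parses.
echo "$line" | grep -Eq '^([0-9]+),([0-9]+)?,,([0-9]+),' && echo "match with ?"
```

                                       The same quick check works for any grok tweak: expand the pattern macros into a plain regex and feed it a captured log line before touching the conf files.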

                                      After that, log files were parsing successfully, I refreshed my Index Pattern Field List to pull in all of the new fields, imported the Visualizations.json and opened up the Dashboard. All is working now on my single core atom with 2GB DDR2!

                                      @doktornotor:

                                      @johnpoz:

                                      I have it running, but elasticstack doesn't seem to want to stay running.  Haven't had time to look into why.

                                      Make sure you've allocated at least 4GiB of RAM to this thing. (Java  >:( ::))

                                      I have this up and running (for home use) on an old netbook with an atom N450 (Pineview ~2010, single core 1.66GHz) with 2GB DDR2. I had to significantly lower RAM usage in the following two files to get it working. Currently using <1.5GB RAM, the OS is lubuntu with GUI service disabled. It's also running a Unifi controller. Dashboard is slow to load even for a small home network but it works! I couldn't justify buying anything to get an ELK stack for my home network.

                                      /etc/elasticsearch/jvm.options
                                      
                                      /etc/logstash/jvm.options
                                      

                                      Untitled.png

                                      • I
                                        idealanthony
                                        last edited by

                                        @BrunoCAVILLE:

                                        Eveything works well except the maps visualization, someone can help?

                                         @BrunoCAVILLE - I'm having the same problem you did. I used the revised visualization file due to the .keyword issue. I've attempted to merge back in the country sections from the http://pfelk.3ilson.com/ visualization file, but still no luck. Were you able to identify/resolve the issue?

                                        https://forum.pfsense.org/index.php?topic=125376.0

                                        • P
                                          pfBasic Banned
                                          last edited by

                                          Did you refresh your fields list (Management / Index Patterns) after a number of your log files were successfully parsed?

                                           If not, do that first, then try to import the pfelk Visualizations.json.

                                          The import fails if you don't have the appropriate fields available.

                                          Untitled.png

                                          • 8
                                            8ayM
                                            last edited by

                                             I just wanted to add that the Kibana4 init script from the OP is no longer listed via a link as the others were, so I'm copying it here in text form. It took me a moment to realize that the scripts are all included in a zip file in the OP as well.

                                            Kibana4 init script:

                                             #!/bin/sh
                                             #
                                             # /etc/init.d/kibana4 -- startup script for kibana4
                                             #
                                             # bsmith@the408.com 2015-02-20; used elasticsearch init script as template
                                             # https://github.com/akabdog/scripts/edit/master/kibana4_init
                                             #
                                             ### BEGIN INIT INFO
                                             # Provides:          kibana4
                                             # Required-Start:    $network $remote_fs $named
                                             # Required-Stop:     $network $remote_fs $named
                                             # Default-Start:     2 3 4 5
                                             # Default-Stop:      0 1 6
                                             # Short-Description: Starts kibana4
                                             # Description:       Starts kibana4 using start-stop-daemon
                                             ### END INIT INFO

                                             # Configure this with wherever you unpacked kibana:
                                             KIBANA_BIN=/opt/kibana4/bin

                                             NAME=kibana4
                                             PID_FILE=/var/run/$NAME.pid
                                             PATH=/bin:/usr/bin:/sbin:/usr/sbin:$KIBANA_BIN
                                             DAEMON=$KIBANA_BIN/kibana
                                             DESC="Kibana4"

                                             if [ "$(id -u)" -ne 0 ]; then
                                                     echo "You need root privileges to run this script"
                                                     exit 1
                                             fi

                                            . /lib/lsb/init-functions

                                            if [ -r /etc/default/rcS ]; then
                                                    . /etc/default/rcS
                                            fi

                                            case "$1" in
                                              start)
                                                    log_daemon_msg "Starting $DESC"

                                                     pid=$(pidofproc -p $PID_FILE kibana)
                                                    if [ -n "$pid" ] ; then
                                                            log_begin_msg "Already running."
                                                            log_end_msg 0
                                                            exit 0
                                                    fi

                                            # Start Daemon
                                                     start-stop-daemon --start --pidfile "$PID_FILE" --make-pidfile --background --exec $DAEMON
                                                    log_end_msg $?
                                                    ;;
                                              stop)
                                                    log_daemon_msg "Stopping $DESC"

                                                     if [ -f "$PID_FILE" ]; then
                                                             start-stop-daemon --stop --pidfile "$PID_FILE" \
                                                                     --retry=TERM/20/KILL/5 >/dev/null
                                                             RC=$?
                                                             if [ $RC -eq 1 ]; then
                                                                     log_progress_msg "$DESC is not running but pid file exists, cleaning up"
                                                             elif [ $RC -eq 3 ]; then
                                                                     PID=$(cat $PID_FILE)
                                                                     log_failure_msg "Failed to stop $DESC (pid $PID)"
                                                                     exit 1
                                                             fi
                                                            rm -f "$PID_FILE"
                                                    else
                                                            log_progress_msg "(not running)"
                                                    fi
                                                    log_end_msg 0
                                                    ;;
                                              status)
                                                    status_of_proc -p $PID_FILE kibana kibana && exit 0 || exit $?
                                                ;;
                                              restart|force-reload)
                                                    if [ -f "$PID_FILE" ]; then
                                                            $0 stop
                                                            sleep 1
                                                    fi
                                                    $0 start
                                                    ;;
                                              *)
                                                    log_success_msg "Usage: $0 {start|stop|restart|force-reload|status}"
                                                    exit 1
                                                    ;;
                                            esac

                                            Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.