Netgate Discussion Forum

ELK + pfSense 2.3 Working

General pfSense Questions
41 Posts 21 Posters 38.2k Views
  • A
    AMizil
    last edited by May 13, 2017, 8:26 PM

    @BrunoCAVILLE:

    I'm currently going through the process of installing ELK but I have an important question. If I redirect the logs from pfSense to the ELK server will I be able to access the raw logs somewhere? I need to have them somewhere and I'm wondering where they would be if they are sent to ELK.

    Go to Status > System Logs > Settings and jump to Remote log servers: you can add up to two more syslog servers there, e.g. syslog-ng, Splunk, etc.
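If one of those extra entries points at a syslog-ng box, a minimal sketch of a config that archives the raw pfSense logs to flat files could look like this (the port, path, and names are assumptions, not from this thread):

```
# Assumed port/path; make them match your pfSense "Remote log servers" entry.
source s_pfsense {
    udp(ip(0.0.0.0) port(5140));
};
destination d_pfsense_raw {
    file("/var/log/pfsense/raw-${YEAR}-${MONTH}-${DAY}.log" create-dirs(yes));
};
log { source(s_pfsense); destination(d_pfsense_raw); };
```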

    • R
      ronv
      last edited by Jun 21, 2017, 2:19 PM

      Hi all,

      trying to get this going with pfSense 2.3.4 and ELK 5.4 - all components are talking OK, and I can get the JSON Dashboard, Search and Visualization up and running - almost…:

      • when I import the visualizations, Kibana complains that the tags geoip.country_name and geoip.city_name are not available.
      • I checked 11-pfsense.conf (which I used from this site) against the spec at https://www.elastic.co/guide/en/logstash/current/plugins-filters-geoip.html, and there does not appear to be any issue with this - that is, it looks like those tags should be returned.

      Anything else I could check, or logs I could provide?

      kind regards

      Ron

      • H
        hamed_forum
        last edited by Jun 25, 2017, 4:57 AM

        Where are the logs sent from pfSense saved on the ELK server?
        I'm moving to a new ELK server - how can I export the logs from the previous server and import them on the new one?

        • P
          pfBasic Banned
          last edited by Jul 8, 2017, 8:35 AM

          Any differences to get this running on 2.4.0 BETA?

          • P
            pfBasic Banned
            last edited by Jul 12, 2017, 5:16 PM

            I finally got this up & running on pfSense 2.4.0 BETA with the help of AR15USR and some people on IRC.

            Initially I was having trouble getting the Index Patterns to populate in the first step of Kibana. I had followed doktornotor's advice for setting up MaxMind. For whatever reason that didn't work for me so I just did it according to http://pfelk.3ilson.com/ and it worked.

            Next, I had everything stable and logs being imported, but all logs were being tagged "_grokparsefailure" & "_geoip_lookup_failure" and since the pattern wasn't matching, it wasn't putting out any useful fields/information. This was also preventing me from importing the Visualizations.json due to not having the applicable fields available.

            After way too much time troubleshooting and trying to figure out what was happening and why, a kind IRC user gave me some direction and pointed me to the grok debugger: https://grokdebug.herokuapp.com/
            For anyone looking to troubleshoot or modify their own grok pattern files, here's what I could make of the fields in 2.4.0 BETA's Rsyslog format. https://forum.pfsense.org/index.php?topic=133354.msg733494#msg733494
            Run a pcap to see exactly what your pfSense box is sending to your ELK server.

            It turned out that all I needed to do was change one character in /etc/logstash/conf.d/patterns/pfsense2-3.grok and reboot.

            I changed line 16 (PFSENSE_LOG_DATA)
            From:

            PFSENSE_LOG_DATA (%{INT:rule}),(%{INT:sub_rule}),,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
            

            To:

            PFSENSE_LOG_DATA (%{INT:rule}),(%{INT:sub_rule})?,,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
            

            That's it, one "?".
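The effect of that single "?" can be sanity-checked outside Logstash with plain grep, using an ERE approximation of the pattern head (the sample line below is illustrative, not taken from a real log):

```shell
# Illustrative filterlog-style line: rule 5, *empty* sub_rule field (pfSense 2.4)
line='5,,,1000000103,igb1,match,block,in,4,'

# Original head: sub_rule must contain digits, so the empty field never matches
echo "$line" | grep -Eq '^[0-9]+,[0-9]+,,' || echo "old pattern: no match"

# With the group made optional, the empty sub_rule field is accepted
echo "$line" | grep -Eq '^[0-9]+,([0-9]+)?,,' && echo "new pattern: match"
```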

            After that, log files were parsing successfully, I refreshed my Index Pattern Field List to pull in all of the new fields, imported the Visualizations.json and opened up the Dashboard. All is working now on my single core atom with 2GB DDR2!

            @doktornotor:

            @johnpoz:

            I have it running, but elasticstack doesn't seem to want to stay running.  Haven't had time to look into why.

            Make sure you've allocated at least 4GiB of RAM to this thing. (Java  >:( ::))

            I have this up and running (for home use) on an old netbook with an atom N450 (Pineview ~2010, single core 1.66GHz) with 2GB DDR2. I had to significantly lower RAM usage in the following two files to get it working. Currently using <1.5GB RAM, the OS is lubuntu with GUI service disabled. It's also running a Unifi controller. Dashboard is slow to load even for a small home network but it works! I couldn't justify buying anything to get an ELK stack for my home network.

            /etc/elasticsearch/jvm.options
            
            /etc/logstash/jvm.options
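The attached file contents did not survive, but the relevant knobs in both jvm.options files are the JVM heap flags. An illustrative low-memory setting for a 2GB box (these values are an assumption, not the poster's exact ones; Elastic recommends keeping -Xms equal to -Xmx) would be:

```
# /etc/elasticsearch/jvm.options and /etc/logstash/jvm.options
-Xms256m
-Xmx256m
```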
            

            • I
              idealanthony
              last edited by Jul 16, 2017, 8:33 AM

              @BrunoCAVILLE:

              Everything works well except the maps visualization, can someone help?

              @BrunoCAVILLE - I'm having the same problem as you did. I used the revised visualization file due to the .keyword issue. I've attempted to merge back in the country sections from the http://pfelk.3ilson.com/ visualization file, but still no luck. Just wanted to know if you were able to identify/resolve the issue?

              https://forum.pfsense.org/index.php?topic=125376.0

              • P
                pfBasic Banned
                last edited by Jul 16, 2017, 11:47 PM

                Did you refresh your fields list (Management / Index Patterns) after a number of your log files were successfully parsed?

                If not, do that first and then try to import the pfelk Visualizations.json again.

                The import fails if you don't have the appropriate fields available.

                • 8
                  8ayM
                  last edited by Sep 14, 2017, 1:57 AM

                  I just wanted to add that the Kibana4 init script from the OP is no longer linked as the others were, so I'm copying it here in text form. (It took me a moment to realize that the scripts are also all included in a zip file in the OP.)

                  Kibana4 init script:

                  #!/bin/sh
                  #
                  # /etc/init.d/kibana4 -- startup script for kibana4
                  #
                  # bsmith@the408.com 2015-02-20; used elasticsearch init script as template
                  # https://github.com/akabdog/scripts/edit/master/kibana4_init
                  #
                  ### BEGIN INIT INFO
                  # Provides:          kibana4
                  # Required-Start:    $network $remote_fs $named
                  # Required-Stop:     $network $remote_fs $named
                  # Default-Start:     2 3 4 5
                  # Default-Stop:      0 1 6
                  # Short-Description: Starts kibana4
                  # Description:       Starts kibana4 using start-stop-daemon
                  ### END INIT INFO

                  # Configure this with wherever you unpacked kibana:
                  KIBANA_BIN=/opt/kibana4/bin

                  NAME=kibana4
                  PID_FILE=/var/run/$NAME.pid
                  PATH=/bin:/usr/bin:/sbin:/usr/sbin:$KIBANA_BIN
                  DAEMON=$KIBANA_BIN/kibana
                  DESC="Kibana4"

                  if [ "$(id -u)" -ne 0 ]; then
                          echo "You need root privileges to run this script"
                          exit 1
                  fi

                  . /lib/lsb/init-functions

                  if [ -r /etc/default/rcS ]; then
                          . /etc/default/rcS
                  fi

                  case "$1" in
                    start)
                          log_daemon_msg "Starting $DESC"

                          pid=$(pidofproc -p "$PID_FILE" kibana)
                          if [ -n "$pid" ] ; then
                                  log_begin_msg "Already running."
                                  log_end_msg 0
                                  exit 0
                          fi

                          # Start Daemon
                          start-stop-daemon --start --pidfile "$PID_FILE" --make-pidfile --background --exec "$DAEMON"
                          log_end_msg $?
                          ;;
                    stop)
                          log_daemon_msg "Stopping $DESC"

                          if [ -f "$PID_FILE" ]; then
                                  start-stop-daemon --stop --pidfile "$PID_FILE" \
                                          --retry=TERM/20/KILL/5 >/dev/null
                                  ret=$?
                                  if [ $ret -eq 1 ]; then
                                          log_progress_msg "$DESC is not running but pid file exists, cleaning up"
                                  elif [ $ret -eq 3 ]; then
                                          PID=$(cat "$PID_FILE")
                                          log_failure_msg "Failed to stop $DESC (pid $PID)"
                                          exit 1
                                  fi
                                  rm -f "$PID_FILE"
                          else
                                  log_progress_msg "(not running)"
                          fi
                          log_end_msg 0
                          ;;
                    status)
                          status_of_proc -p "$PID_FILE" kibana kibana && exit 0 || exit $?
                      ;;
                    restart|force-reload)
                          if [ -f "$PID_FILE" ]; then
                                  $0 stop
                                  sleep 1
                          fi
                          $0 start
                          ;;
                    *)
                          log_success_msg "Usage: $0 {start|stop|restart|force-reload|status}"
                          exit 1
                          ;;
                  esac

                  • 8
                    8ayM
                    last edited by Sep 14, 2017, 2:33 AM

                    @AR15USR:

                    ando1, any idea what is going on?

                    PS: I ran every one of your troubleshooting commands and they all error out, fyi…

                    How did you make out with this? I'm running into the same issue, and it doesn't look like it was ever resolved.

                    • A
                      AR15USR
                      last edited by Sep 16, 2017, 2:40 PM

                      I'm starting to get this error when my Dashboard refreshes: "Courier Fetch: 28 of 325 shards failed."

                      I've noticed that I'm seeing yellow health and replications:

                      health status index               uuid                   pri rep docs.count docs.deleted store.size pri.store.size
                      green  open   logstash-2017.09.09 yhdtjrKHQVycMOCfBmssWQ   5   0     347962            0    150.8mb        150.8mb
                      green  open   logstash-2017.09.10 Yg98wyN5SYav2dnc8OmFxA   5   0     359406            0      158mb          158mb
                      green  open   logstash-2017.09.11 mG66BkrDQOSnyJCqI5Ir-w   5   0     380644            0    164.2mb        164.2mb
                      green  open   logstash-2017.09.12 y26fNsoORtW6cSx1QcE7ZQ   5   0     390537            0    169.2mb        169.2mb
                      green  open   logstash-2017.09.13 MxyncENMRXqxLnMuJIw1rw   5   0     353464            0    152.2mb        152.2mb
                      yellow open   logstash-2017.09.14 Gp3dZ-uUTeWv9YIS4calhw   5   1     376975            0    163.5mb        163.5mb
                      yellow open   logstash-2017.09.15 cq8n4mYYSWGZZrzrb50B-g   5   1     392566            0    165.2mb        165.2mb
                      yellow open   logstash-2017.09.16 u7aF2fGSSmOJmJCU4odO5w   5   1     210728            0     94.5mb         94.5mb
                      
                      

                      Does anyone know why this is happening all of a sudden?
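One plausible explanation (an assumption, not confirmed in this thread): the yellow indices were created with one replica ("rep 1" in the listing above), and on a single-node cluster replica shards can never be assigned, so searches touching those indices report shard failures. A common single-node fix is to drop replicas to zero, e.g. from Kibana Dev Tools or curl:

```
PUT logstash-*/_settings
{
  "index": { "number_of_replicas": 0 }
}
```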


                      • A
                        AR15USR
                        last edited by Sep 17, 2017, 5:01 PM

                        Anyone able to give me any clues or places to start with the above?


                        • T
                          tzidore
                          last edited by Oct 26, 2017, 9:38 AM

                          Hi

                          I have been trying to fetch logs from pfSense 2.4.1 all day. I haven't been able to get the grok pattern to work.
                          The post below does nothing for me. It worked fine on 2.3.4_p1.

                          Any ideas?

                          @pfBasic:

                          I finally got this up & running on pfSense 2.4.0 BETA with the help of AR15USR and some people on IRC.

                          Initially I was having trouble getting the Index Patterns to populate in the first step of Kibana. I had followed doktornotor's advice for setting up MaxMind. For whatever reason that didn't work for me so I just did it according to http://pfelk.3ilson.com/ and it worked.

                          Next, I had everything stable and logs being imported, but all logs were being tagged "_grokparsefailure" & "_geoip_lookup_failure" and since the pattern wasn't matching, it wasn't putting out any useful fields/information. This was also preventing me from importing the Visualizations.json due to not having the applicable fields available.

                          After way too much time troubleshooting and trying to figure out what was happening and why, a kind IRC user gave me some direction and pointed me to the grok debugger: https://grokdebug.herokuapp.com/
                          For anyone looking to troubleshoot or modify their own grok pattern files, here's what I could make of the fields in 2.4.0 BETA's Rsyslog format. https://forum.pfsense.org/index.php?topic=133354.msg733494#msg733494
                          Run a pcap to see exactly what your pfSense box is sending to your ELK server.

                          It turned out that all I needed to do was change one character in /etc/logstash/conf.d/patterns/pfsense2-3.grok and reboot.

                          I changed line 16 (PFSENSE_LOG_DATA)
                          From:

                          PFSENSE_LOG_DATA (%{INT:rule}),(%{INT:sub_rule}),,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
                          

                          To:

                          PFSENSE_LOG_DATA (%{INT:rule}),(%{INT:sub_rule})?,,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
                          

                          That's it, one "?".

                          • F
                            f11
                            last edited by Oct 26, 2017, 5:19 PM

                            So after messing about with the Grok debugger, modifying the .grok file to the following seemed to work:

                            PFSENSE_LOG_DATA (%{INT:rule}),,,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
                            PFSENSE_IP_SPECIFIC_DATA (%{PFSENSE_IPv4_SPECIFIC_DATA}|%{PFSENSE_IPv6_SPECIFIC_DATA})
                            PFSENSE_IPv4_SPECIFIC_DATA (%{BASE16NUM:tos}),,(%{INT:ttl}),(%{INT:id}),(%{INT:offset}),(%{WORD:flags}),(%{INT:proto_id}),(%{WORD:proto}),
                            PFSENSE_IPv4_SPECIFIC_DATA_ECN (%{BASE16NUM:tos}),(%{INT:ecn}),(%{INT:ttl}),(%{INT:id}),(%{INT:offset}),(%{WORD:flags}),(%{INT:proto_id}),(%{WORD:proto}),
                            PFSENSE_IPv6_SPECIFIC_DATA (%{BASE16NUM:class}),(%{DATA:flow_label}),(%{INT:hop_limit}),(%{WORD:proto}),(%{INT:proto_id}),
                            PFSENSE_IP_DATA (%{INT:length}),(%{IP:src_ip}),(%{IP:dest_ip}),
                            PFSENSE_PROTOCOL_DATA (%{PFSENSE_TCP_DATA}|%{PFSENSE_UDP_DATA}|%{PFSENSE_ICMP_DATA}|%{PFSENSE_CARP_DATA})
                            PFSENSE_TCP_DATA (%{INT:src_port}),(%{INT:dest_port}),(%{INT:data_length}),(%{WORD:tcp_flags}),(%{INT:sequence_number}),(%{INT:ack_number}),(%{INT:tcp_window}),(%{DATA:urg_data}),(%{DATA:tcp_options})
                            PFSENSE_UDP_DATA (%{INT:src_port}),(%{INT:dest_port}),(%{INT:data_length})
                            PFSENSE_ICMP_DATA (%{PFSENSE_ICMP_TYPE}%{PFSENSE_ICMP_RESPONSE})
                            PFSENSE_ICMP_TYPE (?<icmp_type>(request|reply|unreachproto|unreachport|unreach|timeexceed|paramprob|redirect|maskreply|needfrag|tstamp|tstampreply)),
                            PFSENSE_ICMP_RESPONSE (%{PFSENSE_ICMP_ECHO_REQ_REPLY}|%{PFSENSE_ICMP_UNREACHPORT}| %{PFSENSE_ICMP_UNREACHPROTO}|%{PFSENSE_ICMP_UNREACHABLE}|%{PFSENSE_ICMP_NEED_FLAG}|%{PFSENSE_ICMP_TSTAMP}|%{PFSENSE_ICMP_TSTAMP_REPLY})
                            PFSENSE_ICMP_ECHO_REQ_REPLY (%{INT:icmp_echo_id}),(%{INT:icmp_echo_sequence})
                            PFSENSE_ICMP_UNREACHPORT (%{IP:icmp_unreachport_dest_ip}),(%{WORD:icmp_unreachport_protocol}),(%{INT:icmp_unreachport_port})
                            PFSENSE_ICMP_UNREACHPROTO (%{IP:icmp_unreach_dest_ip}),(%{WORD:icmp_unreachproto_protocol})
                            PFSENSE_ICMP_UNREACHABLE (%{GREEDYDATA:icmp_unreachable})
                            PFSENSE_ICMP_NEED_FLAG (%{IP:icmp_need_flag_ip}),(%{INT:icmp_need_flag_mtu})
                            PFSENSE_ICMP_TSTAMP (%{INT:icmp_tstamp_id}),(%{INT:icmp_tstamp_sequence})
                            PFSENSE_ICMP_TSTAMP_REPLY (%{INT:icmp_tstamp_reply_id}),(%{INT:icmp_tstamp_reply_sequence}),(%{INT:icmp_tstamp_reply_otime}),(%{INT:icmp_tstamp_reply_rtime}),(%{INT:icmp_tstamp_reply_ttime})
                            
                            PFSENSE_CARP_DATA (%{WORD:carp_type}),(%{INT:carp_ttl}),(%{INT:carp_vhid}),(%{INT:carp_version}),(%{INT:carp_advbase}),(%{INT:carp_advskew})
                            
                            DHCPD (%{DHCPDISCOVER}|%{DHCPOFFER}|%{DHCPREQUEST}|%{DHCPACK}|%{DHCPINFORM}|%{DHCPRELEASE})
                            DHCPDISCOVER %{WORD:dhcp_action} from %{COMMONMAC:dhcp_client_mac}%{SPACE}(\(%{GREEDYDATA:dhcp_client_hostname}\))? via (?<dhcp_client_vlan>[0-9a-z_]*)(: %{GREEDYDATA:dhcp_load_balance})?
                            DHCPOFFER %{WORD:dhcp_action} on %{IPV4:dhcp_client_ip} to %{COMMONMAC:dhcp_client_mac}%{SPACE}(\(%{GREEDYDATA:dhcp_client_hostname}\))? via (?<dhcp_client_vlan>[0-9a-z_]*)
                            DHCPREQUEST %{WORD:dhcp_action} for %{IPV4:dhcp_client_ip}%{SPACE}(\(%{IPV4:dhcp_ip_unknown}\))? from %{COMMONMAC:dhcp_client_mac}%{SPACE}(\(%{GREEDYDATA:dhcp_client_hostname}\))? via (?<dhcp_client_vlan>[0-9a-z_]*)(: %{GREEDYDATA:dhcp_request_message})?
                            DHCPACK %{WORD:dhcp_action} on %{IPV4:dhcp_client_ip} to %{COMMONMAC:dhcp_client_mac}%{SPACE}(\(%{GREEDYDATA:dhcp_client_hostname}\))? via (?<dhcp_client_vlan>[0-9a-z_]*)
                            DHCPINFORM %{WORD:dhcp_action} from %{IPV4:dhcp_client_ip} via (?<dhcp_client_vlan>[0-9a-z_]*)
                            DHCPRELEASE %{WORD:dhcp_action} of %{IPV4:dhcp_client_ip} from %{COMMONMAC:dhcp_client_mac}%{SPACE}(\(%{GREEDYDATA:dhcp_client_hostname}\))? via (?<dhcp_client_vlan>[0-9a-z_]*)
                            

                            Essentially the line

                            PFSENSE_LOG_DATA (%{INT:rule}),(%{INT:sub_rule}),,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
                            
                            

                            Was changed to

                            PFSENSE_LOG_DATA (%{INT:rule}),,,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
                            

                            Now logs seem to be parsing correctly and my dashboard is looking populated again.

                            • A
                              alex_ncus
                              last edited by Nov 7, 2017, 3:59 PM

                              NOOB

                              Has anyone been able to get the latest ELK running in Docker containers and parsing pfSense 2.4.1 logs? I'm trying to get pfSense to send logs to a NAS, and then have my Mac workstation run ELK in Docker to parse/analyze the data staged on the NAS.

                              Does this even make sense?
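For what it's worth, a minimal docker-compose sketch for an all-in-one ELK container could look like the following (the sebp/elk image, port choices, and the syslog port are assumptions, not something this thread used):

```yaml
version: "3"
services:
  elk:
    image: sebp/elk          # all-in-one Elasticsearch + Logstash + Kibana
    ports:
      - "5601:5601"          # Kibana UI
      - "9200:9200"          # Elasticsearch REST API
      - "5140:5140/udp"      # syslog input for pfSense (must match your Logstash config)
```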

                              • D
                                donnydavis
                                last edited by Nov 7, 2017, 10:49 PM

                                I used EFK and it works great for me

                                • V
                                  vinchi007
                                  last edited by Feb 4, 2018, 9:45 PM

                                  Thanks for sharing! I've set up my ELK stack and pfSense 2.2 in a FreeNAS jail. One thing to note if you're using FreeNAS: the logstash startup script only loads the logstash.conf file, so I combined all of these conf files into one. Also note that you need to change the path to the "patterns" directory where the grok patterns file resides.
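Merging the split pipeline files can be as simple as concatenating them in name order into the single file the startup script loads (the path below is an assumption; adjust it to your jail):

```shell
# Assumed jail path; the numeric prefixes keep input -> filter -> output order.
LS_ETC=/usr/local/etc/logstash
if [ -d "$LS_ETC/conf.d" ]; then
    cat "$LS_ETC"/conf.d/*.conf > "$LS_ETC/logstash.conf"
fi
```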

                                  • A
                                    AR15USR
                                    last edited by Feb 5, 2018, 6:06 PM

                                    Is anyone else having issues with grok failures on 2.4.2? I'm seeing more than 50% failures. It seems to be failing on the inbound block/pass logs; the outbound block/pass logs seem to work fine.

                                    I've set it up exactly according to http://pfelk.3ilson.com/ and have also set up MaxMind according to the instructions here: https://forum.pfsense.org/index.php?topic=120937.msg671603#msg671603


                                    • T
                                      thhi
                                      last edited by Mar 16, 2018, 10:19 AM

                                      Maybe this is useful for someone:

                                      We use VLANs in our environment, and our interface names contain "." (not only "word" characters), so we modified "PFSENSE_LOG_DATA" in the grok filter:

                                      
                                      IFACE \b[a-zA-Z0-9.]+\b
                                      #PFSENSE_LOG_DATA (%{INT:rule}),(%{INT:sub_rule}),,(%{INT:tracker}),(%{WORD:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
                                      PFSENSE_LOG_DATA (%{INT:rule}),(%{INT:sub_rule})?,,(%{INT:tracker}),(%{IFACE:iface}),(%{WORD:reason}),(%{WORD:action}),(%{WORD:direction}),(%{INT:ip_ver}),
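The difference is easy to demonstrate with grep: WORD is roughly \w+, which stops at the dot in a VLAN interface name, while the custom IFACE class accepts it (the sample name is illustrative):

```shell
iface='igb1.100'    # illustrative VLAN interface name

# A WORD-style class ([[:alnum:]_]) cannot cover the dot
echo "$iface" | grep -Eq '^[[:alnum:]_]+$' || echo "WORD-style: no full match"

# The custom IFACE class from the post accepts it
echo "$iface" | grep -Eq '^[a-zA-Z0-9.]+$' && echo "IFACE: match"
```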
                                      
                                      
                                      • G
                                        gurulee
                                        last edited by Nov 22, 2018, 4:22 PM

                                        I followed this guide initially: https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elastic-stack-on-ubuntu-16-04 . At that point I was only seeing data from Filebeat in Kibana.

                                        But after merging in these steps, I no longer see any log data, either from pfSense or from Filebeat. Do I need to disable/remove Filebeat?

                                        • G
                                          gurulee @gurulee
                                          last edited by Nov 23, 2018, 4:45 PM

                                          @gurulee said in ELK + pfSense 2.3 Working:

                                          Kibana

                                          I believe Filebeat was screwing things up. After hours of racking my brain, I blew away the VM and reinstalled the ELK stack. I ended up rebuilding it according to this guide http://pfelk.3ilson.com/2017/10/pfsense-v24xkibanaelasticsearchlogstash.html?m=1, and it's working now. I just need to tweak dashboards and visualizations at this point.

                                          Copyright 2025 Rubicon Communications LLC (Netgate). All rights reserved.