
How-To Guides

11/25/2018

Preface

This guide is a collection of the steps I performed while building a security lab using open source software. The purpose of the lab was to better understand how platforms such as Elasticsearch, Logstash, and Kibana (ELK) can be used for security event monitoring and network forensics. I wrote the guide as a reference for myself and also for Security Dredge readers.

However, if you are an experienced Linux user and a seasoned security analyst, this guide is likely overly detailed. That said, if it saves you time or serves as a reference, I hope it furthers your understanding of the ELK stack and security event monitoring.

Part I – Preparing Your Lab

After performing the base install of CentOS with the NIST 800-171 security profile, perform the following steps to prepare your system for Elasticsearch. Note, this guide assumes you have provisioned your system with the following:

System Specs for Lab:

  • Host ESXi 5.5 or above
  • Guest 12 GB RAM
  • Guest 4 vCore CPU with 1 core per socket
  • Guest 350 GB thin provisioned virtual storage

VM Guest Example:

The above system specifications are a suggested starting point. You might be able to run with a smaller VM, but since I am doing some testing with my Elasticsearch box, I decided to use the above as a starting point for my lab.

Installing CentOS Updates

Note, if you are running within VMware, now might be a good time to snapshot your system. In the event something breaks, you can roll back to the base/fresh install. As part of our system prep, I recommend updating all software on your fresh CentOS build.

  • First, check for system updates by running – yum check-update
  • Next, to update all system software, run – yum update (recommended)

Install Net Tools

Install net-tools in CentOS so you can run common Unix commands, such as ifconfig, netstat, etc. I personally find net-tools easier to work with; whether you install it is purely a personal preference.

  • yum install net-tools -y
  • Now, test by running – ifconfig
    • Make sure your output looks similar to the following and check the IP address and netmask: ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500  inet 192.168.1.25  netmask 255.255.255.0  broadcast 192.168.1.255

Install Java

Elasticsearch requires Java as a software dependency. You can install Java via the following command –

  • sudo yum install java
  • You can verify both the installation and version of Java with the following command –
    • java -version
    • Example Output

openjdk version "1.8.0_191"
OpenJDK Runtime Environment (build 1.8.0_191-b12)
OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)

Install TcpDump (Optional)

I install tcpdump because I find it can come in handy when troubleshooting. For example, it can be used to make sure your source systems (data feeds) are sending data to a listening port. I’ve used this to verify my PFSense box is sending event logs via UDP to port 5140 on my Elasticsearch box.

  • yum install tcpdump

Example Usage:

tcpdump -i ens160 port 9200
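
For the PFSense use case described above, a similar check on the syslog port we will open later in this guide (5140/udp) would be:

tcpdump -i ens160 -n udp port 5140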

Install wget (Optional)

This application is very handy to have when you want to grab/download scripts from the Internet.

  • yum install wget

Configure CentOS Firewall

Since we are using CentOS with the NIST 800-171 security profile, we will likely need to modify the host-based firewall. To allow traffic to the ports we will use, run the following commands –

  • firewall-cmd --permanent --add-port=5601/tcp
    • This is the port Kibana uses for web access
  • firewall-cmd --permanent --add-port=5140/udp
    • This is the port we are using for syslog events from our PFSense firewall
  • firewall-cmd --permanent --add-port=9995/udp
    • This is the port I am using for netflow (softflowd in PFSense)
  • firewall-cmd --reload
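
You can verify the new rules took effect after the reload –

  • firewall-cmd --list-ports
    • This should list the three ports we just added, e.g. 5601/tcp 5140/udp 9995/udp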

Create Repository Information for Elasticsearch

I have found the easiest install method is the one described in the Elasticsearch install guide for CentOS/RedHat: create a repo file so we can use the yum installer. Create the file with the following steps.

  • Change to yum.repos.d – cd /etc/yum.repos.d/
  • Create the elasticsearch repo file – vi elasticsearch.repo
  • Now, add the following to your elasticsearch.repo file –

[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

  • Save the above information to elasticsearch.repo by pressing the ESC key and entering – :wq!  This will save the changes to your file.
  • Check to make sure all of the information was saved to your file by running – cat elasticsearch.repo

Your output should look like this –

[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
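
You can also confirm yum sees the new repository –

  • yum repolist
    • The elasticsearch-6.x repository should appear in the list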


Part I Summary

Now time for a quick summary of what we have done so far:

  1. Provisioned our VM and configured our system specs
  2. Installed CentOS with the NIST 800-171 security profile
  3. Created a snapshot and installed all CentOS updates
  4. Installed optional apps, such as net-tools, tcpdump, etc.
  5. Created the repository file to make the Elasticsearch install easier

Part II – Logstash

Logstash allows us to ingest data from various sources, such as PFSense, Bro, and other data feeds. Logstash will parse the event logs and neatly transform them into a common format that is easy to view and search within Elasticsearch. Later in the guide we will discuss how to configure your PFSense firewall to send events to Logstash.

Install Logstash

This time installing Logstash is very easy because we can run yum without having to create another repository file – the repo we created in Part I covers the other Elastic packages too.

  • Run – # yum install logstash

Example Output:

As you can see from the example above, yum will grab Logstash and automatically install it for you. After everything is downloaded and installed, you should see a message showing Logstash is installed.

Auto Starting the Logstash Services

To make things easier, we can tell the system to auto start Logstash upon boot up. Run the following commands –

• /bin/systemctl daemon-reload
• /bin/systemctl enable logstash.service

You should see the following output after running the above commands –
•  Created symlink from /etc/systemd/system/multi-user.target.wants/logstash.service to /etc/systemd/system/logstash.service.

Also, just as we will do later with Elasticsearch and Kibana, you can start and stop the service with the following commands –
• sudo systemctl start logstash
• sudo systemctl stop logstash

Output Example:

As you can see in the above example, the Logstash service is showing Active (running). We should now be able to configure the service to accept PFSense syslog events. Please note, I am not the author of the PFSense ELK config files; I found them on GitHub, and the files are from http://pfelk.3ilson.com/.

Downloading PFSense Files for ELK

Before we can start using ELK with PFSense, we need to download the PFSense configuration files (the 01-inputs.conf, 10-syslog.conf, and 11-pfsense.conf files configured in the next section) to /etc/logstash/conf.d –
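
A sketch of the download using wget, assuming the files are hosted at the pfelk project's raw GitHub URLs (the exact URLs below are hypothetical – substitute the project's actual file locations):

cd /etc/logstash/conf.d
# Hypothetical URLs - replace with the pfelk project's actual file locations
wget https://raw.githubusercontent.com/3ilson/pfelk/master/01-inputs.conf
wget https://raw.githubusercontent.com/3ilson/pfelk/master/10-syslog.conf
wget https://raw.githubusercontent.com/3ilson/pfelk/master/11-pfsense.conf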

Next, we need to create a patterns folder via the following commands –

  • mkdir /etc/logstash/conf.d/patterns
  • cd /etc/logstash/conf.d/patterns/

Now we need to grab the PFSense grok pattern file and place it in the patterns folder we just created, as sketched below –
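
A sketch under the same assumption (both the URL and the grok filename here are hypothetical – check the pfelk project for the real ones):

cd /etc/logstash/conf.d/patterns
# Hypothetical URL and filename - replace with the pfelk project's actual grok file
wget https://raw.githubusercontent.com/3ilson/pfelk/master/patterns/pfsense.grok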

Configuring PFSense and ELK Files

  • Edit 01-inputs.conf and set the port you wish to listen on for syslog events, as sketched below.

Example:
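
A minimal sketch of the input definition, assuming the pfelk file uses the standard Logstash udp/tcp input plugins (the actual file may differ – only the port value, matching our firewall rule, is ours):

input {
  # Listen for PFSense syslog events on the port we opened in the firewall
  udp {
    port => 5140
    type => "syslog"
  }
  tcp {
    port => 5140
    type => "syslog"
  }
}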

  • Edit 10-syslog.conf and define your private network IP, e.g. 192.168.1.1 (your PFSense address); a sketch follows.

Example:
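
A hedged sketch of the kind of conditional this file contains – matching events from the firewall's IP and tagging them (the real pfelk filter is more involved):

filter {
  # Tag events coming from the PFSense firewall (192.168.1.1 in this lab)
  if [host] =~ /192\.168\.1\.1/ {
    mutate {
      add_tag => ["PFSense"]
    }
  }
}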

  • Set your time zone in the 11-pfsense.conf file

Example:

date {
  match => [ "datetime", "MMM dd HH:mm:ss" ]
  timezone => "America/New_York"
}

Configuring PFSense Syslog

Now that we have downloaded and configured the PFSense ELK files, we need to configure our PFSense firewall to send log data to our ELK box. Log in as admin to your PFSense system and go to Status -> System Logs -> Settings as shown below –

Example:

Next, configure Remote Logging Options as shown below –

Example:

  • Configure your remote log server by entering the IP address and port. In this example, we are going to send logs to 192.168.1.25:5140. Pro Tip – Don’t forget to click Save so your settings are stored!

Part II Summary

Now time for a quick summary of what we have done so far:

  1. Downloaded and installed Logstash
  2. Configured Logstash to automatically start upon system start/reboot
  3. Verified the Logstash service is up and running
  4. Downloaded and configured PFSense pattern and input/output files
  5. Configured PFSense to send logs to our ELK box and saved our settings

Part III – Elasticsearch

Install Elasticsearch

With the foundational work done, we can now install Elasticsearch. Run the following command at the command prompt:
• sudo yum install elasticsearch

If you followed the steps above, Elasticsearch should now be installed on your system. Before we dive in, let’s make sure Elasticsearch starts automatically.

Configuring Elasticsearch Services

After you have successfully installed Elasticsearch, there are just a few items we need to modify in the config file.

  • cd /etc/elasticsearch
  • vi elasticsearch.yml
  • Under Node, modify the following line – node.name, e.g. YOURLAB

Example:

  • Next, modify Network – network.host: localhost

Example:
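
After both edits, the two relevant lines in elasticsearch.yml should look something like this (the node name is whatever you chose):

node.name: YOURLAB
network.host: localhost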

  • Using VI, issue :wq! to save your changes, and you are done!

Auto Starting the Elasticsearch Services

For the sake of this guide I will assume you are using CentOS v7.5 or greater. You can check your version of CentOS via the following command –
• cat /etc/centos-release
• Example output – CentOS Linux release 7.5.1804 (Core)

Let’s check whether your system uses SysV, init, or systemd by running the following command as root –
• ps -p 1
• Example Output –
PID TTY TIME CMD
1 ? 00:00:05 systemd
As you can see from the systemd text in the CMD column, our CentOS is using systemd to tell the OS what to start upon boot up.

To start Elasticsearch upon boot up run the following commands –
• /bin/systemctl daemon-reload
• /bin/systemctl enable elasticsearch.service

You should see the following output after running the above command –

• Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.

Manually Starting the Elasticsearch Services

You can also start and stop the service with the following commands –
• sudo systemctl start elasticsearch.service
• sudo systemctl stop elasticsearch.service

Check Elasticsearch Services:

Now that we have installed Elasticsearch and configured it to start as a service, we can check the system to make sure everything is up and running by performing the following tests:

• First verify the service is running via – systemctl status elasticsearch

Example Output:

You should see the status as active (running).

If you don’t see the service as active and running, simply issue the following command and recheck the service:

• systemctl restart elasticsearch

Another command we can run to verify the Elasticsearch service is up and running is the curl command.
• curl -X GET 'http://localhost:9200'
Example Output:
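
Representative output, assuming a 6.x install (your name, cluster values, and version number will differ; some fields omitted):

{
  "name" : "YOURLAB",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "6.5.1"
  },
  "tagline" : "You Know, for Search"
}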


Part III Summary

Now time for a quick summary of what we have done so far in part III of the lab install:

  1. Installed Elasticsearch
  2. Configured Elasticsearch to automatically start upon system start/reboot
  3. Verified the Elasticsearch service is up and running
  4. Restarted the service and verified it is running via curl
  5. If something isn’t working, review part I and part II and re-test

Part IV – Kibana

Now that we have made it this far, we need to add a nice GUI for our security event visualization and analytics. Kibana is built on top of Elasticsearch and is designed to be used in a web browser to view the data we are sending to Elasticsearch. In addition, you can build charts, maps, tables, and more with it. We will create some test charts and a test map later in this section.

Install Kibana

This time installing Kibana is very easy because we can run yum without having to create the repository file.
• Run – # yum install kibana

Example Output:

Auto Starting the Kibana Services

Just like in Part II, let’s tell the system to auto start Kibana upon boot up. Run the following commands –
• /bin/systemctl daemon-reload
• /bin/systemctl enable kibana.service
• You should see the following output after running the above command –
• Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /etc/systemd/system/kibana.service.

Also, just like with Elasticsearch and Logstash, you can start and stop the service with the following commands –
• sudo systemctl start kibana
• sudo systemctl stop kibana

Configuring Kibana Services

Okay, we are making good progress! Next, we need to configure the Kibana web interface, but don’t worry, it is very easy. We need to edit kibana.yml using the following steps –
• cd /etc/kibana/
Pro Tip – if you get lost, simply search for the yml file with the following command –
find / -name kibana.yml

Example Output:

• Use VI or nano to edit the file –
• vi /etc/kibana/kibana.yml
• I usually just copy and paste the line I am editing, so I keep a copy of the original setting in the file.
Example:

Pro Tip – Now might be a good time to back up the entire kibana.yml configuration file with the following command – cp kibana.yml kibana.yml.bkup

• If you need help using VI please refer to the following website – https://www.howtogeek.com/102468/a-beginners-guide-to-editing-text-files-with-vi/
• Uncomment the following line – server.port: 5601 (Note, you can set this to whatever port you like, such as 443; if you change it, be sure to update the firewall rule from Part I)

• Uncomment and set the IP address – server.host: "192.168.1.25"

Example:

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "192.168.1.25"

• Save your work within the kibana.yml file using the :wq! command
• Note, if you make a mistake, you can simply restore the original configuration from the backup we created, e.g. cp kibana.yml.bkup kibana.yml. This will overwrite any changes you made and revert to the original file.

Checking the Kibana Service

At this stage our Kibana service should be up and running, but before we try to use it, let’s check it via the following commands –
• systemctl status kibana
• systemctl start kibana – this will start the Kibana service if it isn’t running


Output Example:

As you can see in the above example, the Kibana service is showing Active (running). We should now be able to use our favorite web browser to view our data.

Using Kibana
Using Firefox, I pointed my browser to my internal instance on the Kibana port we configured, e.g. http://192.168.1.25:5601, and was able to connect to the Kibana service.

Example:

Creating Logging Indices
What are indices, you ask? I won’t go extremely deep into how this works because it is very technical, and I would not do it justice. Instead, I will provide a very watered-down definition based on the Elastic.co blog. In short, indices organize data logically and physically through data shards, or pieces of data. Think of them as “bags” which contain document types, or schemas, each of which has an ID number. Elasticsearch searches within the bags, looking at the terms contained within the document types, and assigns each term an ID number or value. This combination is a type of algorithm that allows for rapid searching and full-text, “Google-like” word searches.

Okay, now that we have that part out of the way, let’s create our logging indices. Within the Kibana web interface, click on Logs which is located on the left side of the screen.

Example:

  • We need to define the index pattern; notice it found the index marked in red above.
  • In the Index pattern field, enter the name of the index using a wildcard. This will ensure the system will process any Logstash index.
  • Enter the following syntax – logstash-*

Example:

As shown in the example above, we have entered a valid pattern, denoted by the “Success” message. In addition, the pattern will match any new indexes with “logstash-” in the name. Since the indices are incremented based on the date (year, month, day – e.g. logstash-2018.11.25), we will match any future indices.

  • Click next and set the Time Filter field name to – @timestamp

Example:

  • Click the Create index pattern button, and you have successfully created a Logstash index pattern for PFSense.

Example:

Basic Searching

You should now be able to click on Discover in the left-side Kibana menu and search for interesting events. Maybe you want to see all traffic that has been blocked by your firewall? We can quickly perform this search by entering the following string:

  • Enter – “action block” and press Enter

Example:

  • You can also set the timeframe for searching and set the screen to auto-refresh, so it updates in near real-time.

Examples:

Part IV Summary

Now time for a quick summary of what we have done so far in part IV of the lab install:

  1. Installed Kibana
  2. Configured Kibana to automatically start upon system start/reboot
  3. Verified the Kibana service is up and running
  4. Restarted the service and verified it is running
  5. Created an index pattern for PFSense
  6. Performed some basic searches within Kibana
  7. If something isn’t working, review the steps in part I, II, and III and re-test.

Part V – Troubleshooting

The following are some basic steps you can use in the event you run into problems. First, always be sure to use the ELK log files; they really are very helpful. At one point I broke my lab and ELK wasn’t processing my PFSense logs. I had to start thinking about how ELK works and how it processes data feeds.

Remember, in our configuration the data is processed in this order:

  1. Logstash listens for the logs you are sending (e.g. PFSense syslog) and parses them
  2. Elasticsearch indexes and stores the data
  3. Kibana is the front-end GUI used to search and create visualizations

I decided to start at step one, which is looking at my Logstash config files. I remembered I had made some changes to allow Logstash to process netflow data. I decided my next step should be to tail the Logstash log file to see if I could identify any errors.

I used the following command to monitor the Logstash log file:

  • tail -f /var/log/logstash/logstash-plain.log

I had no idea what I was looking for, but I did notice some “error” messages. I noticed the errors were being generated from the logstash.yml config file. After inspecting the file, I saw the changes I had made to enable netflow contained syntax errors, and this was preventing the system from creating the logstash-* index.

  • Pro Tip – this is a good example of why creating a backup file is so important! Always back up your config files prior to making any changes. It makes troubleshooting and restoration super easy, e.g. cp logstash.yml.bkup logstash.yml

I edited logstash.yml, removed the errors, and restarted all ELK services via the following commands:

  • systemctl restart logstash.service && systemctl restart elasticsearch.service && systemctl restart kibana.service

As I was tailing logstash-plain.log, I noticed the error messages went away. I then checked to make sure the system was creating new indices using the following command –

  • curl -XGET localhost:9200/_cat/indices?v

Example Output:
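
Representative output (names, UUIDs, and document counts will differ; the line to look for is the dated logstash-* index):

health status index               ...
green  open   .kibana_1           ...
yellow open   logstash-2018.11.25 ...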

The logstash-* index shown above is what I was looking for. Once I saw it, I knew I could re-create the index pattern in Kibana.

References

http://pfelk.3ilson.com/
https://www.elastic.co
https://github.com
https://www.pfsense.org/