Working with Logstash

1 hour • 5 Learning Objectives

About this Hands-on Lab

Your manager has asked you to set up an Elastic Stack to centralize syslog reporting. You will need to install and configure the following:

* Elasticsearch
* Logstash
* Filebeat
* Kibana

Once all the services are installed, working together, and configured to start up after a system reboot, access Kibana over an SSH tunnel and make sure the system is working properly.

Learning Objectives

Successfully complete this lab by achieving the following learning objectives:

Install Elasticsearch

Install Elasticsearch with default settings:

  1. Install Java:

    yum install java-1.8.0-openjdk -y
  2. Import Elastic’s GPG key:

    rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
  3. Download the Elasticsearch RPM:

    curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.3.rpm
  4. Install Elasticsearch:

    rpm --install elasticsearch-6.2.3.rpm
  5. Enable and start Elasticsearch:

    systemctl daemon-reload
    systemctl enable elasticsearch
    systemctl start elasticsearch
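Once started, you can verify Elasticsearch is answering (a quick sanity check; assumes the default localhost:9200 binding):

```shell
# Query the cluster health endpoint; a single-node install typically
# reports "green" or "yellow" once startup completes (allow ~30s).
curl -s 'http://localhost:9200/_cluster/health?pretty'
```
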
Install Logstash

Install Logstash with default settings:

  1. Import the Logstash key:

    rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
  2. Add the Logstash repo:

    vi /etc/yum.repos.d/logstash.repo

    Give it the following contents:

    [logstash-6.x]
    name=Elastic repository for 6.x packages
    baseurl=https://artifacts.elastic.co/packages/6.x/yum
    gpgcheck=1
    gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
    enabled=1
    autorefresh=1
    type=rpm-md
  3. Install Logstash:

    yum install logstash -y
  4. Enable and start Logstash:

    systemctl enable logstash
    systemctl start logstash
Install Kibana

Install Kibana with default settings:

  1. Download Kibana:

    curl -O https://artifacts.elastic.co/downloads/kibana/kibana-6.2.3-x86_64.rpm
  2. Install Kibana:

    rpm --install kibana-6.2.3-x86_64.rpm
  3. Enable and start Kibana:

    systemctl enable kibana
    systemctl start kibana
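Kibana listens on port 5601 by default; once started, a quick check that it is answering (assumes the default localhost binding, and Kibana can take a minute to come up):

```shell
# An HTTP 200 from the status API indicates Kibana is up.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:5601/api/status
```
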
Install Filebeat and use the System Module

Install Filebeat with default settings and use the system module:

  1. Download Filebeat:

    curl -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.3-x86_64.rpm
  2. Install Filebeat:

    rpm --install filebeat-6.2.3-x86_64.rpm
  3. Edit the system module to convert timestamp timezones to UTC:

    In /etc/filebeat/modules.d/system.yml.disabled, change:

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false

    to:

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    var.convert_timezone: true

    Make this change in both the syslog and auth sections.

  4. Enable the system Filebeat module:

    filebeat modules enable system
  5. Install the ingest-geoip filter plugin for Elasticsearch ingest node:

    /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-geoip
  6. Restart Elasticsearch so it can use the new ingest-geoip plugin:

    systemctl restart elasticsearch
  7. Once Elasticsearch starts up, push module assets to Elasticsearch and Kibana:

    filebeat setup
  8. Enable and start Filebeat:

    systemctl enable filebeat
    systemctl start filebeat
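With Filebeat running, syslog and auth events should begin flowing into Elasticsearch. One way to confirm (assumes the default filebeat-* index naming):

```shell
# Document counts in the daily filebeat-* indices should grow
# as new syslog events are ingested.
curl -s 'http://localhost:9200/_cat/indices/filebeat-*?v'
```
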
Connect to Kibana and Explore the Data

Connect to Kibana and explore your system log data:

  1. From your local machine, SSH with port forwarding to your cloud server’s public IP:

    ssh user_name@public_ip -L 5601:localhost:5601
  2. Navigate to localhost:5601 in your web browser.

  3. Go to the Dashboard plugin via the side navigation bar.

  4. Search for system to filter to your system dashboards.

  5. Explore your system log data with the supplied dashboards.

Additional Resources

  1. Import the Elasticsearch key:

    rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
  2. Add a logstash.repo:

    [logstash-6.x]
    name=Elastic repository for 6.x packages
    baseurl=https://artifacts.elastic.co/packages/6.x/yum
    gpgcheck=1
    gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
    enabled=1
    autorefresh=1
    type=rpm-md
  3. Install Elasticsearch:

    • Edit elasticsearch.yml and change the node.name to master.
    • Uncomment network.host and set it to localhost.
    • Enable and start the elasticsearch service.
  4. Install Logstash. Create a syslog.conf and configure the inputs to use beats.

  5. Setup a grok filter that matches the following:

    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
  6. In the grok filter, add a received_at field set to the event timestamp and a received_from field set to the host.

  7. Add a syslog_pri filter.

  8. Add a date filter for syslog_timestamp and format it to MMM dd HH:mm:ss.

  9. The output should be sent to elasticsearch and stdout.

  10. Set up an SSH tunnel to access Kibana.
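Putting steps 4 through 9 together, the pipeline could look like the sketch below (the file path /etc/logstash/conf.d/syslog.conf and the Beats port 5044 are assumptions based on common defaults, not given in the lab):

```conf
# /etc/logstash/conf.d/syslog.conf (assumed path)
input {
  beats {
    port => 5044    # default Beats port (assumption)
  }
}

filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
    add_field => [ "received_at", "%{@timestamp}" ]
    add_field => [ "received_from", "%{host}" ]
  }
  syslog_pri { }
  date {
    # "MMM  d" (two spaces) covers single-digit days in syslog timestamps
    match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
```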

