System Log Aggregation with the Elastic Stack

Jun 08, 2023 • 5 Minute Read


The Elastic Stack is configurable for just about any use case that involves collecting, searching, and analyzing data. To make it easy to get up and running, we can use modules to quickly implement a preconfigured pipeline. In this brief tutorial, we are going to use the Filebeat System module to collect log events from /var/log/secure and /var/log/auth.log and then analyze them through the dashboards the module creates in Kibana.

For this demonstration, I am going to be using a t2.medium EC2 instance on the A Cloud Guru Cloud Playground, where the server comes pre-configured for you. If you are not a subscriber, feel free to follow along with your own cloud server or virtual machine; all you need is a CentOS 7 host with 1 CPU and 4 GB of memory.

[Image: Linux Academy Cloud Playground]

Elasticsearch

First, we need to install the only prerequisite for Elasticsearch: a Java Development Kit (JDK). I am going to be using OpenJDK, specifically the java-1.8.0-openjdk package:
sudo yum install java-1.8.0-openjdk -y
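To confirm the JDK installed correctly, we can check the Java version:
java -version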
Now we can install Elasticsearch. I am going to install via RPM, so first let's import Elastic's GPG key:
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Now we can download and install the Elasticsearch RPM:
curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.2.rpm
sudo rpm --install elasticsearch-6.4.2.rpm
sudo systemctl daemon-reload
Let's enable the Elasticsearch service so it starts after a reboot and then start Elasticsearch:
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
The ingest pipeline created by the Filebeat system module uses a GeoIP processor to look up geographical information for IP addresses found in the log events. For this to work, we first need to install it as a plugin for Elasticsearch:
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-geoip
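If you'd like to verify the installation, the same script can list all installed plugins; ingest-geoip should appear in the output:
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin list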
Now we need to restart Elasticsearch in order for it to recognize the new plugin:
sudo systemctl restart elasticsearch
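Elasticsearch can take a little while to come back up. Once it does, we can confirm it is responding on its default port, 9200, with a quick health check:
curl 'http://localhost:9200/_cluster/health?pretty'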

Kibana

We already have the Elastic GPG key imported, so let's download and install the Kibana RPM:
curl -O https://artifacts.elastic.co/downloads/kibana/kibana-6.4.2-x86_64.rpm
sudo rpm --install kibana-6.4.2-x86_64.rpm
Now we can start and enable the Kibana service:
sudo systemctl enable kibana
sudo systemctl start kibana
Because Kibana and Elasticsearch both come with sensible defaults for a single-node deployment, we do not need to make any configuration changes to either service.
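Kibana can take a minute to finish starting. For a quick sanity check, Kibana 6.x exposes a status API on its default port of 5601 that we can query from the host:
curl http://localhost:5601/api/status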

Filebeat

Now we can install the client that will be collecting our logs, Filebeat. Again, because we already have the Elastic GPG key imported, we can download and install the Filebeat RPM:
curl -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.2-x86_64.rpm
sudo rpm --install filebeat-6.4.2-x86_64.rpm
We want to store our log events in Elasticsearch with a UTC timestamp. That way, Kibana can simply convert from UTC to whatever time zone our browser is in at request time. To enable this conversion, let's uncomment the following variable and set it to true in /etc/filebeat/modules.d/system.yml.disabled, in both the syslog and auth sections:
var.convert_timezone: true
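After the edit, the relevant parts of the file should look roughly like this sketch (other commented-out variables omitted):
- module: system
  syslog:
    enabled: true
    var.convert_timezone: true
  auth:
    enabled: true
    var.convert_timezone: true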
Now we can enable the System module and push the module assets to Elasticsearch and Kibana:
sudo filebeat modules enable system
sudo filebeat setup
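To double-check that the module was enabled, we can list Filebeat's modules and look for system under the enabled section:
sudo filebeat modules list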
Finally, we can enable and start the Filebeat service to begin collecting our system log events:
sudo systemctl enable filebeat
sudo systemctl start filebeat
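After a minute or so, Filebeat should be shipping events. We can confirm by asking Elasticsearch for its Filebeat indices, which should show a growing document count:
curl 'http://localhost:9200/_cat/indices/filebeat-*?v'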

Analyze

By default, Kibana listens on localhost:5601. So in order to browse Kibana in our local web browser, let's use SSH to log in to our host with port forwarding:
ssh username@hostname_or_ip -L 5601:localhost:5601
Now we can navigate to http://localhost:5601 in our local web browser to access our remote instance of Kibana. From Kibana's side navigation pane, select Dashboard and search for "system" to see all the System module dashboards. To take things a step further, you can create your own honeypot by exposing your host to the internet to garner even more log events to analyze.

[Image: Syslog Dashboard]
[Image: Sudo Commands Dashboard]
[Image: SSH Logins Dashboard]
[Image: New Users and Groups Dashboard]

Want to know more?

From creating beautiful visualizations to managing the Elastic Stack, Kibana helps you get the most out of your data. At A Cloud Guru, we offer a ton of fantastic learning content for Elastic products. Get a brief overview of all the products in the Elastic Stack with the Elastic Stack Essentials course. Or get to know the heart of the Elastic Stack, Elasticsearch, with the Elasticsearch Deep Dive course. When you're ready, prove your mastery of the Elastic Stack by becoming an Elastic Certified Engineer with our latest certification preparation course. All of these courses are packed with Hands-On Labs and lessons that you can follow along with using your very own A Cloud Guru cloud servers. So what are you waiting for? Let's get Elastic!

[Image: The Elastic Stack Ecosystem]