Aggregating and Analyzing Data with Elastic Stack Modules

3 hours
  • 12 Learning Objectives

About this Hands-on Lab

The Elastic Stack provides a wide range of Beat clients to collect and ship all kinds of data. Each Beat also includes modules that come pre-packaged with the configurations, Elasticsearch index templates, ingest pipelines, and Kibana dashboards needed for common data sources, so anyone can get up and running with the Elastic Stack quickly. In this hands-on lab, you will:

  • Deploy and configure a three-node Elasticsearch cluster.
  • Generate and deploy Elasticsearch node certificates.
  • Encrypt the Elasticsearch transport network.
  • Enable user authentication and set the built-in user passwords.
  • Deploy and configure Kibana to connect to Elasticsearch.
  • Deploy and configure Filebeat, and enable its system module to collect, ship, parse, and visualize system log files.
  • Deploy and configure Metricbeat, and use its system module to collect, ship, and visualize system telemetry data in Kibana.
  • Explore the Kibana user interface and analyze your system log and telemetry data.

Learning Objectives

Successfully complete this lab by achieving the following learning objectives:

Install Elasticsearch on Each Node
  1. Log in to each node as cloud_user via SSH using the public IP addresses provided.

  2. Become the root user:

    sudo su -
  3. Import the Elastic GPG key:

    rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
  4. Download the Elasticsearch 7.6 RPM:

    curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.0-x86_64.rpm
  5. Install Elasticsearch:

    rpm --install elasticsearch-7.6.0-x86_64.rpm
  6. Configure Elasticsearch to start on system boot:

    systemctl enable elasticsearch
Configure Each Node to Form a Three-Node Cluster per the Instructions
  1. Open the /etc/elasticsearch/elasticsearch.yml file:

    vim /etc/elasticsearch/elasticsearch.yml
  2. Change the following line on each node:

    #cluster.name: my-application

    to

    cluster.name: development
  3. Change the following line on the master-1 node:

    #node.name: node-1

    to

    node.name: master-1
  4. Change the following line on the data-1 node:

    #node.name: node-1

    to

    node.name: data-1
  5. Change the following line on the data-2 node:

    #node.name: node-1

    to

    node.name: data-2
  6. Add the following lines on the master-1 node:

    node.master: true
    node.data: false
    node.ingest: true
    node.ml: false
  7. Add the following lines on the data-1 and data-2 nodes:

    node.master: false
    node.data: true
    node.ingest: false
    node.ml: false
  8. Change the following line on each node:

    #network.host: 192.168.0.1

    to

    network.host: [_local_, _site_]
  9. Change the following line on each node:

    #discovery.seed_hosts: ["host1", "host2"]

    to

    discovery.seed_hosts: ["10.0.1.101"]
  10. Change the following line on each node:

    #cluster.initial_master_nodes: ["node-1", "node-2"]

    to

    cluster.initial_master_nodes: ["master-1"]
  11. Save and close /etc/elasticsearch/elasticsearch.yml.

  12. Start Elasticsearch:

    systemctl start elasticsearch
  13. Check your configuration using the _cat/nodes API:

    curl localhost:9200/_cat/nodes?v
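If the cluster formed correctly, the _cat/nodes output should list all three nodes, with master-1 elected as master. As a sketch of what to look for, the following parses a hypothetical sample of that output (heap, RAM, and load values are placeholders; yours will differ):

```shell
# Hypothetical sample of `curl localhost:9200/_cat/nodes?v` output once all
# three nodes have joined (numeric values are placeholders).
sample='ip         heap.percent ram.percent cpu load_1m node.role master name
10.0.1.101           12          85   1    0.06 im        *      master-1
10.0.1.102           10          84   1    0.02 d         -      data-1
10.0.1.103           11          83   1    0.03 d         -      data-2'

# Count the data rows (skip the header) to confirm all three nodes are present.
node_count=$(printf '%s\n' "$sample" | tail -n +2 | grep -c .)
echo "nodes joined: $node_count"

# The row with "*" in the master column is the elected master node.
echo "elected master: $(printf '%s\n' "$sample" | awk '$7 == "*" {print $8}')"
```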
Generate and Deploy the Development Certificate to Each Node
  1. Create the /etc/elasticsearch/certs directory on each node:

    mkdir /etc/elasticsearch/certs
  2. On the master-1 node, generate the development PKCS#12 certificate:

    /usr/share/elasticsearch/bin/elasticsearch-certutil cert --name development --out /etc/elasticsearch/certs/development
  3. Allow group read access to the development certificate on the master-1 node:

    chmod 640 /etc/elasticsearch/certs/development
  4. Copy the development certificate from the master-1 node to nodes data-1 and data-2:

    scp /etc/elasticsearch/certs/development 10.0.1.102:/etc/elasticsearch/certs/
    scp /etc/elasticsearch/certs/development 10.0.1.103:/etc/elasticsearch/certs/
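To confirm the copies are intact, one option is to compare SHA-256 checksums across nodes. This sketch only prints the verification commands; running them assumes the same root SSH access between nodes that the scp step above uses:

```shell
# Print a checksum command for each data node; run these from master-1 and
# compare the digests against `sha256sum /etc/elasticsearch/certs/development`
# run locally. All three digests should match.
checks=$(for ip in 10.0.1.102 10.0.1.103; do
  echo "ssh ${ip} sha256sum /etc/elasticsearch/certs/development"
done)
printf '%s\n' "$checks"
```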
Encrypt the Elasticsearch Transport Network on Each Node
  1. Add the following lines to the /etc/elasticsearch/elasticsearch.yml file on each node:

    #
    # ---------------------------------- Security ----------------------------------
    #
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: certs/development
    xpack.security.transport.ssl.truststore.path: certs/development
  2. Restart Elasticsearch:

    systemctl restart elasticsearch
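Once security is enabled, an anonymous request to port 9200 should be rejected with HTTP 401 rather than served. The helper below classifies the status code; it is only a sketch, with the live curl invocation left to the usage note:

```shell
# Classify the HTTP status returned by an unauthenticated request to
# Elasticsearch. 401 is the expected result once xpack.security is enabled.
check_status() {
  case "$1" in
    401) echo "security active: anonymous request rejected" ;;
    200) echo "warning: cluster still accepts anonymous requests" ;;
    *)   echo "unexpected HTTP status: $1" ;;
  esac
}

check_status 401
```

On any node, you can test the live cluster with `check_status "$(curl -s -o /dev/null -w '%{http_code}' localhost:9200)"`.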
Use the `elasticsearch-setup-passwords` Tool to Set the Password for Each Built-In User on the `master-1` Node
  1. Set the built-in user passwords using the elasticsearch-setup-passwords utility on the master-1 node:

    /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
  2. Use the following passwords:

    User: elastic
    Password: la_elastic_503
    
    User: apm_system
    Password: la_apm_system_503
    
    User: kibana
    Password: la_kibana_503
    
    User: logstash_system
    Password: la_logstash_system_503
    
    User: beats_system
    Password: la_beats_system_503
    
    User: remote_monitoring_user
    Password: la_remote_monitoring_user_503
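Each password can be spot-checked against the _security/_authenticate API. The loop below only prints one test command per built-in user (a sketch; running the commands requires the live cluster):

```shell
# Build an authentication test command for each built-in user. A successful
# request returns the user's details; a wrong password returns HTTP 401.
auth_checks=$(for pair in \
  elastic:la_elastic_503 \
  apm_system:la_apm_system_503 \
  kibana:la_kibana_503 \
  logstash_system:la_logstash_system_503 \
  beats_system:la_beats_system_503 \
  remote_monitoring_user:la_remote_monitoring_user_503
do
  echo "curl -u ${pair} 'localhost:9200/_security/_authenticate?pretty'"
done)
printf '%s\n' "$auth_checks"
```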
Deploy Kibana on the `master-1` Node
  1. Download the Kibana 7.6 RPM:

    curl -O https://artifacts.elastic.co/downloads/kibana/kibana-7.6.0-x86_64.rpm
  2. Install Kibana:

    rpm --install kibana-7.6.0-x86_64.rpm
  3. Configure Kibana to start on system boot:

    systemctl enable kibana
Configure Kibana to Bind to the Site-Local Address, Listen on Port 8080, and Connect to Elasticsearch
  1. Open the /etc/kibana/kibana.yml file:

    vim /etc/kibana/kibana.yml
  2. Change the following line:

    #server.port: 5601

    to

    server.port: 8080
  3. Change the following line:

    #server.host: "localhost"

    to

    server.host: "10.0.1.101"
  4. Change the following lines:

    #elasticsearch.username: "kibana"
    #elasticsearch.password: "pass"

    to

    elasticsearch.username: "kibana"
    elasticsearch.password: "la_kibana_503"
  5. Save and close /etc/kibana/kibana.yml.

  6. Start Kibana:

    systemctl start kibana
  7. After Kibana has finished starting up, navigate to http://<PUBLIC_IP_ADDRESS_OF_MASTER-1>:8080 in your web browser and log in as:

    • Username: elastic
    • Password: la_elastic_503
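Kibana can take a minute or two to begin serving requests. Its status API makes a convenient readiness probe; this sketch just assembles the probe command from the lab's host and port:

```shell
# Build a readiness probe for Kibana's status API. From your workstation,
# substitute the public IP of master-1 for the site-local address.
kibana_url="http://10.0.1.101:8080"
probe="curl -u elastic:la_elastic_503 ${kibana_url}/api/status"
echo "$probe"
```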
Deploy Metricbeat on Each Node
  1. Download the Metricbeat 7.6 RPM:

    curl -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.6.0-x86_64.rpm
  2. Install Metricbeat:

    rpm --install metricbeat-7.6.0-x86_64.rpm
  3. Configure Metricbeat to start on system boot:

    systemctl enable metricbeat
Configure Metricbeat on Each Node to Use the System Module to Ingest System Telemetry to Elasticsearch and Visualize It in Kibana
  1. Open the /etc/metricbeat/metricbeat.yml file:

    vim /etc/metricbeat/metricbeat.yml
  2. Change the following lines on each node:

    setup.kibana:
    
      # Kibana Host
      # Scheme and port can be left out and will be set to the default (http and 5601)
      # In case you specify and additional path, the scheme is required: http://localhost:5601/path
      # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
      #host: "localhost:5601"

    to

    setup.kibana:
    
      # Kibana Host
      # Scheme and port can be left out and will be set to the default (http and 5601)
      # In case you specify and additional path, the scheme is required: http://localhost:5601/path
      # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
      host: "10.0.1.101:8080"
  3. Change the following lines on each node:

    output.elasticsearch:
      # Array of hosts to connect to.
      hosts: ["localhost:9200"]
    
      # Protocol - either `http` (default) or `https`.
      #protocol: "https"
    
      # Authentication credentials - either API key or username/password.
      #api_key: "id:api_key"
      #username: "elastic"
      #password: "changeme"

    to

    output.elasticsearch:
      # Array of hosts to connect to.
      hosts: ["10.0.1.101:9200"]
    
      # Protocol - either `http` (default) or `https`.
      #protocol: "https"
    
      # Authentication credentials - either API key or username/password.
      #api_key: "id:api_key"
      username: "elastic"
      password: "la_elastic_503"
  4. Save and close /etc/metricbeat/metricbeat.yml.

  5. Push the index templates and ingest pipelines to Elasticsearch and the module dashboards to Kibana:

    metricbeat setup
  6. Start Metricbeat:

    systemctl start metricbeat
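Once Metricbeat is running, documents should start accumulating in a metricbeat-* index, which you can confirm with `curl -u elastic:la_elastic_503 'localhost:9200/_cat/indices/metricbeat-*?v'`. The snippet below parses a hypothetical, trimmed sample of that output (the index name and count are placeholders):

```shell
# Trimmed sample of _cat/indices output for the Metricbeat index; a non-zero,
# growing docs.count means Metricbeat is shipping data successfully.
sample='health status index                              docs.count
green  open   metricbeat-7.6.0-2020.02.25-000001      4320'

docs=$(printf '%s\n' "$sample" | awk 'NR == 2 {print $4}')
echo "metricbeat documents indexed: $docs"
```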
Deploy Filebeat on Each Node
  1. Download the Filebeat 7.6 RPM:

    curl -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.6.0-x86_64.rpm
  2. Install Filebeat:

    rpm --install filebeat-7.6.0-x86_64.rpm
  3. Configure Filebeat to start on system boot:

    systemctl enable filebeat
Configure Filebeat on Each Node to Use the System Module to Ingest System Logs to Elasticsearch and Visualize Them in Kibana
  1. Open the /etc/filebeat/filebeat.yml file:

    vim /etc/filebeat/filebeat.yml
  2. Change the following lines on each node:

    setup.kibana:
    
      # Kibana Host
      # Scheme and port can be left out and will be set to the default (http and 5601)
      # In case you specify and additional path, the scheme is required: http://localhost:5601/path
      # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
      #host: "localhost:5601"

    to

    setup.kibana:
    
      # Kibana Host
      # Scheme and port can be left out and will be set to the default (http and 5601)
      # In case you specify and additional path, the scheme is required: http://localhost:5601/path
      # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
      host: "10.0.1.101:8080"
  3. Change the following lines on each node:

    output.elasticsearch:
      # Array of hosts to connect to.
      hosts: ["localhost:9200"]
    
      # Protocol - either `http` (default) or `https`.
      #protocol: "https"
    
      # Authentication credentials - either API key or username/password.
      #api_key: "id:api_key"
      #username: "elastic"
      #password: "changeme"

    to

    output.elasticsearch:
      # Array of hosts to connect to.
      hosts: ["10.0.1.101:9200"]
    
      # Protocol - either `http` (default) or `https`.
      #protocol: "https"
    
      # Authentication credentials - either API key or username/password.
      #api_key: "id:api_key"
      username: "elastic"
      password: "la_elastic_503"
  4. Save and close /etc/filebeat/filebeat.yml.

  5. Enable the system module on each node:

    filebeat modules enable system
  6. Push the index templates and ingest pipelines to Elasticsearch and the module dashboards to Kibana:

    filebeat setup
  7. Start Filebeat:

    systemctl start filebeat
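To confirm the module configuration took effect, `filebeat modules list` prints the enabled modules first. This sketch extracts the enabled set from a hypothetical, abbreviated sample of that output:

```shell
# Sample `filebeat modules list` output (abbreviated); only the system module
# should appear under Enabled after the step above.
sample='Enabled:
system

Disabled:
apache
auditd'

# Take the lines between "Enabled:" and the first blank line.
enabled_modules=$(printf '%s\n' "$sample" | sed -n '/^Enabled:/,/^$/p' | sed '1d;/^$/d')
echo "enabled modules: $enabled_modules"
```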
Use Kibana to Explore Your System Logs and Telemetry Data
  1. Navigate to http://<PUBLIC_IP_ADDRESS_OF_MASTER-1>:8080 in your web browser and log in as:

    • Username: elastic
    • Password: la_elastic_503
  2. On the side navigation bar, click on Dashboard.

  3. In the search bar, type "Filebeat System" or "Metricbeat System" to find your sample dashboards.

Additional Resources

You work as a systems administrator in a Network Operations Center (NOC). In order to improve visibility into system issues and performance, you are tasked with deploying a development Elastic Stack that will be used as a proof of concept to collect and visualize system log and telemetry data.

First, you need to deploy and configure three Elasticsearch 7.6 nodes belonging to the development cluster and listening on local and site-local addresses. The node names and roles are as follows:

+-----------+----------------+
| Node Name | Roles          |
+-----------+----------------+
| master-1  | master, ingest |
+-----------+----------------+
| data-1    | data           |
+-----------+----------------+
| data-2    | data           |
+-----------+----------------+

Second, you need to secure the Elasticsearch cluster by generating a PKCS#12 development certificate that you will use to encrypt the Elasticsearch transport network. Because this is a development environment, you will use the same certificate for each node with certificate-level verification only. By enabling transport network encryption, you will also be enabling user authentication, which means you will need to set the passwords for each of the built-in Elasticsearch users as follows:

+------------------------+-------------------------------+
| User                   | Password                      |
+------------------------+-------------------------------+
| elastic                | la_elastic_503                |
+------------------------+-------------------------------+
| apm_system             | la_apm_system_503             |
+------------------------+-------------------------------+
| kibana                 | la_kibana_503                 |
+------------------------+-------------------------------+
| logstash_system        | la_logstash_system_503        |
+------------------------+-------------------------------+
| beats_system           | la_beats_system_503           |
+------------------------+-------------------------------+
| remote_monitoring_user | la_remote_monitoring_user_503 |
+------------------------+-------------------------------+

Once you have a secured three-node Elasticsearch 7.6 cluster up and running, you will need to deploy a Kibana 7.6 instance on the master-1 node. In order to access Kibana from your local web browser, you will need to configure Kibana to bind to the site-local address of the master-1 node (10.0.1.101) and listen on port 8080.

After you have Kibana up and running, you will need to deploy both Metricbeat 7.6 and Filebeat 7.6 clients to each node. You will need to configure each Beat client to talk to Kibana at 10.0.1.101:8080 and output all collected data to the master-1 Elasticsearch node as the elastic user. Lastly, you will need to enable and set up the system module for each Beat client.

Once you have your Elastic Stack collecting, shipping, parsing, and visualizing the system log and telemetry data, log in to your Kibana instance on your master-1 node as the elastic user and explore your new Kibana dashboards.
