Aggregating and Analyzing Data with Elastic Stack Modules

The Elastic Stack provides a wide variety of Beats clients to collect and ship all kinds of data. Each Beat also includes modules that come pre-packaged with the configurations, Elasticsearch index templates, ingest pipelines, and Kibana dashboards for common data sources, so anyone can get up and running with the Elastic Stack quickly. In this hands-on lab, you will:

  • Deploy and configure a three-node Elasticsearch cluster.
  • Generate and deploy Elasticsearch node certificates and encrypt the Elasticsearch transport network.
  • Enable user authentication and set the built-in user passwords.
  • Deploy and configure Kibana to connect to Elasticsearch.
  • Deploy and configure Filebeat, and enable its system module to collect, ship, parse, and visualize system log files.
  • Deploy and configure Metricbeat, and use its system module to collect, ship, and visualize system telemetry data in Kibana.
  • Explore the Kibana user interface and analyze your system log and telemetry data.

Path Info

Level: Beginner
Duration: 3h 0m
Published: Mar 06, 2020

Table of Contents

  1. Challenge

    Install Elasticsearch on Each Node

    1. Log in to each node as cloud_user via SSH using the public IP addresses provided.

    2. Become the root user:

      sudo su -
      
    3. Import the Elastic GPG key:

      rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
      
    4. Download the Elasticsearch 7.6 RPM:

      curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.0-x86_64.rpm
      
    5. Install Elasticsearch:

      rpm --install elasticsearch-7.6.0-x86_64.rpm
      
    6. Configure Elasticsearch to start on system boot:

      systemctl enable elasticsearch
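
    Before moving on, you can confirm the install on each node by checking the package and the service state (an optional sanity check, not one of the lab steps):

      # Verify the installed package and confirm the service is enabled
      rpm -q elasticsearch
      systemctl is-enabled elasticsearch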
      
  2. Challenge

    Configure Each Node to Form a Three-Node Cluster per the Instructions

    1. Open the /etc/elasticsearch/elasticsearch.yml file:

      vim /etc/elasticsearch/elasticsearch.yml
      
    2. Change the following line on each node:

      #cluster.name: my-application
      

      to

      cluster.name: development
      
    3. Change the following line on the master-1 node:

      #node.name: node-1
      

      to

      node.name: master-1
      
    4. Change the following line on the data-1 node:

      #node.name: node-1
      

      to

      node.name: data-1
      
    5. Change the following line on the data-2 node:

      #node.name: node-1
      

      to

      node.name: data-2
      
    6. Add the following lines on the master-1 node:

      node.master: true
      node.data: false
      node.ingest: true
      node.ml: false
      
    7. Add the following lines on the data-1 and data-2 nodes:

      node.master: false
      node.data: true
      node.ingest: false
      node.ml: false
      
    8. Change the following line on each node:

      #network.host: 192.168.0.1
      

      to

      network.host: [_local_, _site_]
      
    9. Change the following line on each node:

      #discovery.seed_hosts: ["host1", "host2"]
      

      to

      discovery.seed_hosts: ["10.0.1.101"]
      
    10. Change the following line on each node:

      #cluster.initial_master_nodes: ["node-1", "node-2"]
      

      to

      cluster.initial_master_nodes: ["master-1"]
      
    11. Save and close /etc/elasticsearch/elasticsearch.yml.

    12. Start Elasticsearch:

      systemctl start elasticsearch
      
    13. Check your configuration using the _cat/nodes API:

      curl localhost:9200/_cat/nodes?v
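
    You can also check overall cluster health; once all three nodes have joined, the cluster should report a green status (an optional check using the _cluster/health API):

      # Expect "number_of_nodes" : 3 and "status" : "green"
      curl localhost:9200/_cluster/health?pretty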
      
  3. Challenge

    Generate and Deploy the Development Certificate to Each Node

    1. Create the /etc/elasticsearch/certs directory on each node:

      mkdir /etc/elasticsearch/certs
      
    2. On the master-1 node, generate the development PKCS#12 certificate:

      /usr/share/elasticsearch/bin/elasticsearch-certutil cert --name development --out /etc/elasticsearch/certs/development
      
    3. Allow group read access to the development certificate on the master-1 node:

      chmod 640 /etc/elasticsearch/certs/development
      
    4. Copy the development certificate from the master-1 node to nodes data-1 and data-2:

      scp /etc/elasticsearch/certs/development 10.0.1.102:/etc/elasticsearch/certs/
      scp /etc/elasticsearch/certs/development 10.0.1.103:/etc/elasticsearch/certs/
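
    Note that scp may not preserve the ownership or group-read permissions of the copied file, so Elasticsearch may not be able to read the certificate on the data nodes. A minimal fix, assuming the root:elasticsearch ownership used by the RPM install:

      # On data-1 and data-2: make the certificate readable by the elasticsearch group
      chown root:elasticsearch /etc/elasticsearch/certs/development
      chmod 640 /etc/elasticsearch/certs/development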
      
  4. Challenge

    Encrypt the Elasticsearch Transport Network on Each Node

    1. Add the following lines to the /etc/elasticsearch/elasticsearch.yml file on each node:

      #
      # ---------------------------------- Security ----------------------------------
      #
      xpack.security.enabled: true
      xpack.security.transport.ssl.enabled: true
      xpack.security.transport.ssl.verification_mode: certificate
      xpack.security.transport.ssl.keystore.path: certs/development
      xpack.security.transport.ssl.truststore.path: certs/development
      
    2. Restart Elasticsearch:

      systemctl restart elasticsearch
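
    With xpack.security.enabled set to true, Elasticsearch now rejects unauthenticated requests, so the earlier _cat/nodes check should fail until the built-in user passwords are set in the next challenge:

      # Expect a 401 security_exception now that authentication is required
      curl localhost:9200/_cat/nodes?v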
      
  5. Challenge

    Use the `elasticsearch-setup-passwords` Tool to Set the Password for Each Built-In User on the `master-1` Node

    1. Set the built-in user passwords using the elasticsearch-setup-passwords utility on the master-1 node:

      /usr/share/elasticsearch/bin/elasticsearch-setup-passwords interactive
      
    2. Use the following passwords:

      User: elastic
      Password: la_elastic_503
      
      User: apm_system
      Password: la_apm_system_503
      
      User: kibana
      Password: la_kibana_503
      
      User: logstash_system
      Password: la_logstash_system_503
      
      User: beats_system
      Password: la_beats_system_503
      
      User: remote_monitoring_user
      Password: la_remote_monitoring_user_503
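
    Once the passwords are set, you can verify authentication by repeating the _cat/nodes check as the elastic superuser:

      # The same check as before, now with credentials
      curl -u elastic:la_elastic_503 localhost:9200/_cat/nodes?v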
      
  6. Challenge

    Deploy Kibana on the `master-1` Node

    1. Download the Kibana 7.6 RPM:

      curl -O https://artifacts.elastic.co/downloads/kibana/kibana-7.6.0-x86_64.rpm
      
    2. Install Kibana:

      rpm --install kibana-7.6.0-x86_64.rpm
      
    3. Configure Kibana to start on system boot:

      systemctl enable kibana
      
  7. Challenge

    Configure Kibana to Bind to the Site-Local Address, Listen on Port 8080, and Connect to Elasticsearch

    1. Open the /etc/kibana/kibana.yml file:

      vim /etc/kibana/kibana.yml
      
    2. Change the following line:

      #server.port: 5601
      

      to

      server.port: 8080
      
    3. Change the following line:

      #server.host: "localhost"
      

      to

      server.host: "10.0.1.101"
      
    4. Change the following lines:

      #elasticsearch.username: "kibana"
      #elasticsearch.password: "pass"
      

      to

      elasticsearch.username: "kibana"
      elasticsearch.password: "la_kibana_503"
      
    5. Save and close /etc/kibana/kibana.yml.

    6. Start Kibana:

      systemctl start kibana
      
    7. After Kibana has finished starting up, navigate to http://<PUBLIC_IP_ADDRESS_OF_MASTER-1>:8080 in your web browser and log in as:

      • Username: elastic
      • Password: la_elastic_503
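
    If the login page doesn't load right away, Kibana may still be starting. You can watch its progress from the master-1 shell (optional checks; the /api/status endpoint returns Kibana's own health report):

      # Check the service, then poll Kibana's status API
      systemctl status kibana
      curl -u elastic:la_elastic_503 http://10.0.1.101:8080/api/status
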
  8. Challenge

    Deploy Metricbeat on Each Node

    1. Download the Metricbeat 7.6 RPM:

      curl -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.6.0-x86_64.rpm
      
    2. Install Metricbeat:

      rpm --install metricbeat-7.6.0-x86_64.rpm
      
    3. Configure Metricbeat to start on system boot:

      systemctl enable metricbeat
      
  9. Challenge

    Configure Metricbeat on Each Node to Use the System Module to Ingest System Telemetry to Elasticsearch and Visualize It in Kibana

    1. Open the /etc/metricbeat/metricbeat.yml file:

      vim /etc/metricbeat/metricbeat.yml
      
    2. Change the following lines on each node:

      setup.kibana:
      
        # Kibana Host
        # Scheme and port can be left out and will be set to the default (http and 5601)
        # In case you specify an additional path, the scheme is required: http://localhost:5601/path
        # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
        #host: "localhost:5601"
      

      to

      setup.kibana:
      
        # Kibana Host
        # Scheme and port can be left out and will be set to the default (http and 5601)
        # In case you specify an additional path, the scheme is required: http://localhost:5601/path
        # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
        host: "10.0.1.101:8080"
      
    3. Change the following lines on each node:

      output.elasticsearch:
        # Array of hosts to connect to.
        hosts: ["localhost:9200"]
      
        # Protocol - either `http` (default) or `https`.
        #protocol: "https"
      
        # Authentication credentials - either API key or username/password.
        #api_key: "id:api_key"
        #username: "elastic"
        #password: "changeme"
      

      to

      output.elasticsearch:
        # Array of hosts to connect to.
        hosts: ["10.0.1.101:9200"]
      
        # Protocol - either `http` (default) or `https`.
        #protocol: "https"
      
        # Authentication credentials - either API key or username/password.
        #api_key: "id:api_key"
        username: "elastic"
        password: "la_elastic_503"
      
    4. Save and close /etc/metricbeat/metricbeat.yml.

    5. Push the index templates and ingest pipelines to Elasticsearch and the module dashboards to Kibana:

      metricbeat setup
      
    6. Start Metricbeat:

      systemctl start metricbeat
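
    To confirm telemetry is flowing, you can look for a Metricbeat index in Elasticsearch shortly after startup (an optional check; the index name includes the Beat version and date):

      # Expect a metricbeat-7.6.0-* index with a growing docs.count
      curl -u elastic:la_elastic_503 "localhost:9200/_cat/indices/metricbeat-*?v"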
      
  10. Challenge

    Deploy Filebeat on Each Node

    1. Download the Filebeat 7.6 RPM:

      curl -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.6.0-x86_64.rpm
      
    2. Install Filebeat:

      rpm --install filebeat-7.6.0-x86_64.rpm
      
    3. Configure Filebeat to start on system boot:

      systemctl enable filebeat
      
  11. Challenge

    Configure Filebeat on Each Node to Use the System Module to Ingest System Logs to Elasticsearch and Visualize Them in Kibana

    1. Open the /etc/filebeat/filebeat.yml file:

      vim /etc/filebeat/filebeat.yml
      
    2. Change the following lines on each node:

      setup.kibana:
      
        # Kibana Host
        # Scheme and port can be left out and will be set to the default (http and 5601)
        # In case you specify an additional path, the scheme is required: http://localhost:5601/path
        # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
        #host: "localhost:5601"
      

      to

      setup.kibana:
      
        # Kibana Host
        # Scheme and port can be left out and will be set to the default (http and 5601)
        # In case you specify an additional path, the scheme is required: http://localhost:5601/path
        # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
        host: "10.0.1.101:8080"
      
    3. Change the following lines on each node:

      output.elasticsearch:
        # Array of hosts to connect to.
        hosts: ["localhost:9200"]
      
        # Protocol - either `http` (default) or `https`.
        #protocol: "https"
      
        # Authentication credentials - either API key or username/password.
        #api_key: "id:api_key"
        #username: "elastic"
        #password: "changeme"
      

      to

      output.elasticsearch:
        # Array of hosts to connect to.
        hosts: ["10.0.1.101:9200"]
      
        # Protocol - either `http` (default) or `https`.
        #protocol: "https"
      
        # Authentication credentials - either API key or username/password.
        #api_key: "id:api_key"
        username: "elastic"
        password: "la_elastic_503"
      
    4. Save and close /etc/filebeat/filebeat.yml.

    5. Enable the system module on each node:

      filebeat modules enable system
      
    6. Push the index templates and ingest pipelines to Elasticsearch and the module dashboards to Kibana:

      filebeat setup
      
    7. Start Filebeat:

      systemctl start filebeat
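
    Likewise, you can confirm the system module is enabled and that log data is being indexed (optional checks):

      # List enabled modules, then look for a filebeat-7.6.0-* index
      filebeat modules list | head
      curl -u elastic:la_elastic_503 "localhost:9200/_cat/indices/filebeat-*?v"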
      
  12. Challenge

    Use Kibana to Explore Your System Logs and Telemetry Data

    1. Navigate to http://<PUBLIC_IP_ADDRESS_OF_MASTER-1>:8080 in your web browser and log in as:

      • Username: elastic
      • Password: la_elastic_503

    2. On the side navigation bar, click on Dashboard.

    3. In the search bar, type "Filebeat System" or "Metricbeat System" to find the pre-built system module dashboards.
