ECE Practice Exam — Part 1

  • 4 hours
  • 6 Learning Objectives

About this Hands-on Lab

In Part 1 of the Elastic Certified Engineer practice exam, you will be tested on the following objectives:

* Deploy and start an Elasticsearch cluster that satisfies a given set of requirements
* Configure the nodes of a cluster to satisfy a given set of requirements
* Secure a cluster using Elasticsearch Security
* Define role-based access control using Elasticsearch Security
* Define an index that satisfies a given set of requirements
* Define and use index aliases
* Define and use an index template for a given pattern that satisfies a given set of requirements
* Define and use a dynamic template that satisfies a given set of requirements
* Define a mapping that satisfies a given set of requirements
* Define and use a custom analyzer that satisfies a given set of requirements
* Define and use multi-fields with different data types and/or analyzers
* Configure an index so that it properly maintains the relationships of nested arrays of objects
* Configure an index that implements a parent/child relationship
* Allocate the shards of an index to specific nodes based on a given set of requirements
* Configure Shard Allocation Awareness and Forced Awareness for an Index
* Configure a cluster for use with a hot/warm architecture

Learning Objectives

Successfully complete this lab by achieving the following learning objectives:

Deploy and Start the 6-Node Cluster.

Deploy Elasticsearch

Using the Secure Shell (SSH), log in to each node as cloud_user via the public IP address.

Open the limits.conf file as root:

sudo vim /etc/security/limits.conf

Add the following line near the bottom:

elastic - nofile 65536

Open the sysctl.conf file as root:

sudo vim /etc/sysctl.conf

Add the following line at the bottom:

vm.max_map_count=262144

Load the new sysctl values:

sudo sysctl -p
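
Optionally, verify that the new value is active:

sysctl vm.max_map_count

The command should print vm.max_map_count = 262144.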

Become the elastic user:

sudo su - elastic

Download the Elasticsearch 7.2.1 binaries to the elastic user’s home directory:

curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.2.1-linux-x86_64.tar.gz

Unpack the archive:

tar -xzvf elasticsearch-7.2.1-linux-x86_64.tar.gz

Remove the archive:

rm elasticsearch-7.2.1-linux-x86_64.tar.gz

Rename the unpacked directory:

mv elasticsearch-7.2.1 elasticsearch

Configure each node’s elasticsearch.yml

Open the elasticsearch.yml file:

vim /home/elastic/elasticsearch/config/elasticsearch.yml

Change the following line:

#cluster.name: my-application

to

cluster.name: linux_academy

Change the following line on master-1:

#node.name: node-1

to

node.name: master-1

Change the following line on data-1:

#node.name: node-1

to

node.name: data-1

Change the following line on data-2:

#node.name: node-1

to

node.name: data-2

Change the following line on data-3:

#node.name: node-1

to

node.name: data-3

Change the following line on data-4:

#node.name: node-1

to

node.name: data-4

Change the following line on coordinator-1:

#node.name: node-1

to

node.name: coordinator-1

Change the following line on data-1:

#node.attr.rack: r1

to

node.attr.zone: 1

Add the following line on data-1:

node.attr.temp: hot

Change the following line on data-2:

#node.attr.rack: r1

to

node.attr.zone: 2

Add the following line on data-2:

node.attr.temp: hot

Change the following line on data-3:

#node.attr.rack: r1

to

node.attr.zone: 1

Add the following line on data-3:

node.attr.temp: warm

Change the following line on data-4:

#node.attr.rack: r1

to

node.attr.zone: 2

Add the following line on data-4:

node.attr.temp: warm

Add the following lines on master-1:

node.master: true
node.data: false
node.ingest: false

Add the following lines on data-1:

node.master: false
node.data: true
node.ingest: true

Add the following lines on data-2:

node.master: false
node.data: true
node.ingest: true

Add the following lines on data-3:

node.master: false
node.data: true
node.ingest: false

Add the following lines on data-4:

node.master: false
node.data: true
node.ingest: false

Add the following lines on coordinator-1:

node.master: false
node.data: false
node.ingest: false

Change the following on each node:

#network.host: 192.168.0.1

to

network.host: [_local_, _site_]

Change the following on each node:

#discovery.seed_hosts: ["host1", "host2"]

to

discovery.seed_hosts: ["10.0.1.101"]

Change the following on each node:

#cluster.initial_master_nodes: ["node-1", "node-2"]

to

cluster.initial_master_nodes: ["master-1"]
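
For reference, after all of the above changes, the modified settings in data-1’s elasticsearch.yml should look like the following sketch (the other nodes differ only in node.name, the node.attr values, and the node role settings):

cluster.name: linux_academy
node.name: data-1
node.attr.zone: 1
node.attr.temp: hot
node.master: false
node.data: true
node.ingest: true
network.host: [_local_, _site_]
discovery.seed_hosts: ["10.0.1.101"]
cluster.initial_master_nodes: ["master-1"]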

Configure the heap

Open the jvm.options file:

vim /home/elastic/elasticsearch/config/jvm.options

Change the following lines:

-Xms1g
-Xmx1g

to

-Xms2g
-Xmx2g

Start Elasticsearch as a daemon on each node

Switch to the elasticsearch directory:

cd /home/elastic/elasticsearch

Start Elasticsearch as a daemon:

./bin/elasticsearch -d -p pid
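
Once Elasticsearch has been started on all six nodes, you can optionally confirm that they have formed a single cluster. Security has not been enabled yet, so a plain HTTP request works at this point:

curl 'localhost:9200/_cat/nodes?v'

The output should list master-1, data-1 through data-4, and coordinator-1.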

Deploy Kibana

On the coordinator-1 node, download the Kibana 7.2.1 binaries to the elastic user’s home directory:

cd /home/elastic
curl -O https://artifacts.elastic.co/downloads/kibana/kibana-7.2.1-linux-x86_64.tar.gz

Unpack the archive:

tar -xzvf kibana-7.2.1-linux-x86_64.tar.gz

Remove the archive:

rm kibana-7.2.1-linux-x86_64.tar.gz

Rename the unpacked directory:

mv kibana-7.2.1-linux-x86_64 kibana

Configure the kibana.yml file

Open the kibana.yml file:

vim /home/elastic/kibana/config/kibana.yml

Change the following line:

#server.port: 5601

to

server.port: 80

Change the following line:

#server.host: "localhost"

to

server.host: "10.0.1.106"

Start Kibana

Exit as the elastic user:

exit

Become the root user:

sudo su -

Start the Kibana server as root with:

/home/elastic/kibana/bin/kibana --allow-root

Secure the Cluster with X-Pack Security.

Generate a Certificate Authority (CA)

Using the Secure Shell (SSH), log in to each node as cloud_user via the public IP address.

Become the elastic user with:

sudo su - elastic

Create a certs directory on each node:

mkdir /home/elastic/elasticsearch/config/certs

On the master-1 node, create a CA certificate with password elastic_ca in the new certs directory:

/home/elastic/elasticsearch/bin/elasticsearch-certutil ca --out config/certs/ca --pass elastic_ca

Generate and deploy a certificate for each node

On the master-1 node, generate each node’s certificate with the CA:

/home/elastic/elasticsearch/bin/elasticsearch-certutil cert --ca config/certs/ca --ca-pass elastic_ca --name master-1 --dns ip-10-1-101.ec2.internal --ip 10.0.1.101 --out config/certs/master-1 --pass elastic_master_1
/home/elastic/elasticsearch/bin/elasticsearch-certutil cert --ca config/certs/ca --ca-pass elastic_ca --name data-1 --dns ip-10-1-102.ec2.internal --ip 10.0.1.102 --out config/certs/data-1 --pass elastic_data_1
/home/elastic/elasticsearch/bin/elasticsearch-certutil cert --ca config/certs/ca --ca-pass elastic_ca --name data-2 --dns ip-10-1-103.ec2.internal --ip 10.0.1.103 --out config/certs/data-2 --pass elastic_data_2
/home/elastic/elasticsearch/bin/elasticsearch-certutil cert --ca config/certs/ca --ca-pass elastic_ca --name data-3 --dns ip-10-1-104.ec2.internal --ip 10.0.1.104 --out config/certs/data-3 --pass elastic_data_3
/home/elastic/elasticsearch/bin/elasticsearch-certutil cert --ca config/certs/ca --ca-pass elastic_ca --name data-4 --dns ip-10-1-105.ec2.internal --ip 10.0.1.105 --out config/certs/data-4 --pass elastic_data_4
/home/elastic/elasticsearch/bin/elasticsearch-certutil cert --ca config/certs/ca --ca-pass elastic_ca --name coordinator-1 --dns ip-10-1-106.ec2.internal --ip 10.0.1.106 --out config/certs/coordinator-1 --pass elastic_coordinator_1

On the master-1 node, remote copy each certificate to the certs directory created on each node:

scp /home/elastic/elasticsearch/config/certs/data-1 10.0.1.102:/home/elastic/elasticsearch/config/certs
scp /home/elastic/elasticsearch/config/certs/data-2 10.0.1.103:/home/elastic/elasticsearch/config/certs
scp /home/elastic/elasticsearch/config/certs/data-3 10.0.1.104:/home/elastic/elasticsearch/config/certs
scp /home/elastic/elasticsearch/config/certs/data-4 10.0.1.105:/home/elastic/elasticsearch/config/certs
scp /home/elastic/elasticsearch/config/certs/coordinator-1 10.0.1.106:/home/elastic/elasticsearch/config/certs

Add the transport keystore password on each node, replacing CERTIFICATE_PASSWORD_HERE with that node’s certificate password (for example, elastic_master_1 on master-1):

echo "CERTIFICATE_PASSWORD_HERE" | /home/elastic/elasticsearch/bin/elasticsearch-keystore add --stdin xpack.security.transport.ssl.keystore.secure_password

Add the transport truststore password on each node:

echo "CERTIFICATE_PASSWORD_HERE" | /home/elastic/elasticsearch/bin/elasticsearch-keystore add --stdin xpack.security.transport.ssl.truststore.secure_password

Add the HTTP keystore password on each node:

echo "CERTIFICATE_PASSWORD_HERE" | /home/elastic/elasticsearch/bin/elasticsearch-keystore add --stdin xpack.security.http.ssl.keystore.secure_password

Add the HTTP truststore password on each node:

echo "CERTIFICATE_PASSWORD_HERE" | /home/elastic/elasticsearch/bin/elasticsearch-keystore add --stdin xpack.security.http.ssl.truststore.secure_password

Configure transport network encryption and restart Elasticsearch

Add the following to /home/elastic/elasticsearch/config/elasticsearch.yml on each node, replacing CERTIFICATE_FILE_NAME_HERE with that node’s certificate file name (for example, master-1 on the master-1 node):

#
# ---------------------------------- X-Pack ------------------------------------
#
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: full
xpack.security.transport.ssl.keystore.path: certs/CERTIFICATE_FILE_NAME_HERE
xpack.security.transport.ssl.truststore.path: certs/CERTIFICATE_FILE_NAME_HERE

Stop Elasticsearch:

pkill -F /home/elastic/elasticsearch/pid

Start Elasticsearch as a background daemon and record the PID to a file:

/home/elastic/elasticsearch/bin/elasticsearch -d -p pid

Use the elasticsearch-setup-passwords tool to set the password for each built-in user

Set the built-in user passwords using the elasticsearch-setup-passwords utility on the master-1 node:

/home/elastic/elasticsearch/bin/elasticsearch-setup-passwords interactive

Use the following passwords:

User: elastic
Password: la_elastic_409

User: apm_system
Password: la_apm_system_409

User: kibana
Password: la_kibana_409

User: logstash_system
Password: la_logstash_system_409

User: beats_system
Password: la_beats_system_409

User: remote_monitoring_user
Password: la_remote_monitoring_user_409

Configure HTTP network encryption and restart Elasticsearch

Add the following to /home/elastic/elasticsearch/config/elasticsearch.yml on each node, again substituting each node’s certificate file name for CERTIFICATE_FILE_NAME_HERE:

xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/CERTIFICATE_FILE_NAME_HERE
xpack.security.http.ssl.truststore.path: certs/CERTIFICATE_FILE_NAME_HERE

Stop Elasticsearch:

pkill -F /home/elastic/elasticsearch/pid

Start Elasticsearch as a background daemon and record the PID to a file:

/home/elastic/elasticsearch/bin/elasticsearch -d -p pid
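
Once the cluster is back up, you can optionally verify that the HTTP layer is now encrypted and that authentication is enforced. The -k flag skips certificate verification because the certificates are self-signed:

curl -k -u elastic:la_elastic_409 'https://localhost:9200/_cat/nodes?v'

The response should list all six nodes; the same request without credentials should now return a 401 error.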

Configure Kibana

Open the kibana.yml file:

vim /home/elastic/kibana/config/kibana.yml

Uncomment and change the elasticsearch.username and elasticsearch.password lines so they read:

elasticsearch.username: "kibana"
elasticsearch.password: "la_kibana_409"

Change the following line:

#elasticsearch.hosts: ["http://localhost:9200"]

to

elasticsearch.hosts: ["https://localhost:9200"]

Change the following line:

#elasticsearch.ssl.verificationMode: full

to

elasticsearch.ssl.verificationMode: none

Restart Kibana

In the console where Kibana is running in the foreground, stop it with Ctrl+C.

Start the Kibana server as root with:

/home/elastic/kibana/bin/kibana --allow-root

Create the Custom Role and User.

Create the cluster_read role

Use the Kibana console tool to execute the following:

POST /_security/role/cluster_read
{
  "cluster": ["monitor"],
  "indices": [
    {
      "names": ["logs-*"],
      "privileges": ["read", "monitor"]
    }
  ]
}

Create the terry user

Use the Kibana console tool to execute the following:

POST /_security/user/terry
{
  "roles": ["kibana_user", "monitoring_user", "cluster_read"],
  "full_name": "Terry Cox",
  "email": "terry@linuxacademy.com",
  "password": "scaryterry123"
}
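
You can optionally confirm the new credentials and role from any node. The monitor cluster privilege granted by cluster_read is sufficient for terry to call the cluster health API (again, -k skips verification of the self-signed certificate):

curl -k -u terry:scaryterry123 'https://localhost:9200/_cluster/health?pretty'
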
Create the “logs” Index Template.

Use the Kibana console tool to execute the following:

PUT _template/logs
{
  "index_patterns": ["logs-*"],
  "aliases": {
    "logs": {}
  },
  "mappings": {
    "dynamic_templates": [
      {
        "strings_as_keywords": {
          "match_mapping_type": "string",
          "mapping": {
            "type": "keyword"
          }
        }
      }
    ],
    "properties": {
      "referrer": {
        "type": "join",
        "relations": {
          "referred_to": "referred_by"
        }
      },
      "body": {
        "type": "text",
        "fields": {
          "html": {
            "type": "text",
            "analyzer": "html"
          }
        }
      },
      "url": {
        "type": "text",
        "analyzer": "simple",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "geoip": {
        "properties": {
          "coordinates": {
            "type": "geo_point"
          }
        }
      },
      "client_ip": {
        "type": "ip"
      },
      "related_content": {
        "type": "nested"
      },
      "useragent": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      }
    }
  },
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1,
    "index.routing.allocation.require.temp": "hot",
    "analysis": {
      "analyzer": {
        "html": {
          "type": "custom",
          "tokenizer": "standard",
          "char_filter": "html_strip",
          "filter": "lowercase"
        }
      }
    }
  }
}
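
Optionally, confirm that the template was stored by retrieving it from any node (or with GET _template/logs in the Kibana console):

curl -k -u elastic:la_elastic_409 'https://localhost:9200/_template/logs?pretty'
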
Create the “logs” Indexes.

Create the logs-2018-10-01 Index

Use the Kibana console tool to execute the following:

PUT logs-2018-10-01
PUT logs-2018-10-01/_settings
{
  "index.routing.allocation.require.temp": "warm"
}

Create the logs-2018-10-02 Index

Use the Kibana console tool to execute the following:

PUT logs-2018-10-02/_doc/0
{
  "url": "https://linuxacademy.com/courses/elastic-certified-engineer",
  "response_code": "200",
  "bytes": 16384,
  "client_ip": "10.0.1.100",
  "geoip.coordinates": "32.9259,97.2531",
  "useragent": "Mozilla/5.0 (Macintosh; Intel Mac OS X x.y; rv:42.0) Gecko/20100101 Firefox/42.0",
  "method": "GET",
  "request_time": 84,
  "body": "<body><h1>Elastic Certified Engineer</h1></body>",
  "referrer": {
    "name": "referred_to"
  }
}
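
Optionally, verify that both indexes carry the logs alias and that their shards were allocated as expected, with logs-2018-10-01 on the warm nodes (data-3 and data-4) and logs-2018-10-02 on the hot nodes (data-1 and data-2):

curl -k -u elastic:la_elastic_409 'https://localhost:9200/logs-*/_alias?pretty'
curl -k -u elastic:la_elastic_409 'https://localhost:9200/_cat/shards/logs-*?v&h=index,shard,prirep,state,node'
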
Configure Shard Allocation Awareness.

Use the Kibana console tool to execute the following:

PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "zone",
    "cluster.routing.allocation.awareness.force.zone.values": "1,2"
  }
}
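
Optionally, confirm that the persistent settings were applied:

curl -k -u elastic:la_elastic_409 'https://localhost:9200/_cluster/settings?pretty'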

Additional Resources

Deploy and Start a 6-Node Cluster

Deploy a 6-node Elasticsearch 7.2.1 cluster with a Kibana 7.2.1 node using the architecture diagram for this learning activity. Start Elasticsearch on all 6 nodes as a background process running as the elastic user. Configure Kibana to bind to the site local IP address and listen on port 80. Start Kibana as the root user.

To use Kibana, navigate to the public IP address of the coordinator-1 node in your web browser.

Secure a Cluster with Elasticsearch Security

Secure your 6-node Elasticsearch cluster by enabling transport network encryption and HTTP network encryption with full verification.

Set the built-in user passwords as follows:

  • elastic: la_elastic_409
  • apm_system: la_apm_system_409
  • kibana: la_kibana_409
  • logstash_system: la_logstash_system_409
  • beats_system: la_beats_system_409
  • remote_monitoring_user: la_remote_monitoring_user_409

Set the certificate passwords as follows:

  • CA: elastic_ca
  • Master-1: elastic_master_1
  • Data-1: elastic_data_1
  • Data-2: elastic_data_2
  • Data-3: elastic_data_3
  • Data-4: elastic_data_4
  • Coordinator-1: elastic_coordinator_1

Configure Kibana to connect to your newly secured cluster. Since we will be using a self-signed certificate for the HTTP network, set Kibana's SSL verification to none.

To use Kibana, navigate to the public IP address of the coordinator-1 node in your web browser and log in with:

  • Username: elastic
  • Password: la_elastic_409

Create a Custom Role and User

Create the cluster_read role with:

  • monitor cluster permissions
  • read and monitor permissions on any logs-* index

Create the terry user with the following attributes:

  • full name: Terry Cox
  • email: terry@linuxacademy.com
  • password: scaryterry123
  • roles: kibana_user, monitoring_user, and cluster_read

Create the logs Index Template

The template should meet the following requirements:

  • Matches on all indices that start with logs-
  • Contains a custom analyzer called html with the standard tokenizer, html_strip character filter, and lowercase token filter
  • Maps the field referrer as a parent/child relationship for the relation of parent type referred_to and child type referred_by
  • Maps the field body as text with the standard analyzer and a text multi-field called html with the html analyzer
  • Maps the field url as text with the simple analyzer and a keyword multi-field called keyword with up to 256 characters
  • Maps the field geoip.coordinates as geo_point
  • Maps the field client_ip as ip
  • Maps the field related_content so that it maintains the relationship of nested arrays of objects
  • Maps the field useragent as a text field and a keyword multi-field called keyword with up to 256 characters
  • Maps all other string fields to non-multi-field keyword fields
  • Creates indexes with 2 primary shards
  • Creates indexes with 1 replica shard
  • Creates all new indexes on the hot data nodes
  • Creates all indexes with the alias logs

Create the logs Indexes

Create a logs-2018-10-01 index and allocate it to the warm nodes.

Create a logs-2018-10-02 index with a document of id 0 and the following field values:

  • url: "https://linuxacademy.com/courses/elastic-certified-engineer"
  • response code: "200"
  • bytes: 16384
  • client ip: "10.0.1.100"
  • geoip coordinates: "32.9259,97.2531"
  • useragent: "Mozilla/5.0 (Macintosh; Intel Mac OS X x.y; rv:42.0) Gecko/20100101 Firefox/42.0"
  • method: "GET"
  • request time: 84
  • body: "<body><h1>Elastic Certified Engineer</h1></body>"
  • referrer name: "referred_to"

Configure Shard Allocation Awareness

Configure the cluster for shard allocation awareness on the zone attribute. Use forced awareness so that, if only one zone is available, the cluster does not try to reallocate the missing replica shards to the remaining zone and overwhelm the remaining nodes.
