In Part 1 of the Elastic Certified Engineer practice exam, you will be tested on the following objectives:
* Deploy and start an Elasticsearch cluster that satisfies a given set of requirements
* Configure the nodes of a cluster to satisfy a given set of requirements
* Secure a cluster using Elasticsearch Security
* Define role-based access control using Elasticsearch Security
* Define an index that satisfies a given set of requirements
* Define and use index aliases
* Define and use an index template for a given pattern that satisfies a given set of requirements
* Define and use a dynamic template that satisfies a given set of requirements
* Define a mapping that satisfies a given set of requirements
* Define and use a custom analyzer that satisfies a given set of requirements
* Define and use multi-fields with different data types and/or analyzers
* Configure an index so that it properly maintains the relationships of nested arrays of objects
* Configure an index that implements a parent/child relationship
* Allocate the shards of an index to specific nodes based on a given set of requirements
* Configure shard allocation awareness and forced awareness for an index
* Configure a cluster for use with a hot/warm architecture
Learning Objectives
Successfully complete this lab by achieving the following learning objectives:
- Deploy and Start the 6-Node Cluster.
Deploy Elasticsearch
Using the Secure Shell (SSH), log in to each node as cloud_user via the public IP address.
Open the limits.conf file as root:
sudo vim /etc/security/limits.conf
Add the following line near the bottom:
elastic - nofile 65536
Open the sysctl.conf file as root:
sudo vim /etc/sysctl.conf
Add the following line at the bottom:
vm.max_map_count=262144
Load the new sysctl values:
sudo sysctl -p
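Optionally, confirm the new value is active before continuing:
sysctl vm.max_map_count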
Become the elastic user:
sudo su - elastic
Download the binaries for Elasticsearch 7.2.1 in the elastic user's home directory:
curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.2.1-linux-x86_64.tar.gz
Unpack the archive:
tar -xzvf elasticsearch-7.2.1-linux-x86_64.tar.gz
Remove the archive:
rm elasticsearch-7.2.1-linux-x86_64.tar.gz
Rename the unpacked directory:
mv elasticsearch-7.2.1 elasticsearch
Configure each node’s elasticsearch.yml
Open the elasticsearch.yml file:
vim /home/elastic/elasticsearch/config/elasticsearch.yml
Change the following line:
#cluster.name: my-application
to
cluster.name: linux_academy
Change the following line on master-1:
#node.name: node-1
to
node.name: master-1
Change the following line on data-1:
#node.name: node-1
to
node.name: data-1
Change the following line on data-2:
#node.name: node-1
to
node.name: data-2
Change the following line on data-3:
#node.name: node-1
to
node.name: data-3
Change the following line on data-4:
#node.name: node-1
to
node.name: data-4
Change the following line on coordinator-1:
#node.name: node-1
to
node.name: coordinator-1
Change the following line on data-1:
#node.attr.rack: r1
to
node.attr.zone: 1
Add the following line on data-1:
node.attr.temp: hot
Change the following line on data-2:
#node.attr.rack: r1
to
node.attr.zone: 2
Add the following line on data-2:
node.attr.temp: hot
Change the following line on data-3:
#node.attr.rack: r1
to
node.attr.zone: 1
Add the following line on data-3:
node.attr.temp: warm
Change the following line on data-4:
#node.attr.rack: r1
to
node.attr.zone: 2
Add the following line on data-4:
node.attr.temp: warm
Add the following lines on master-1:
node.master: true
node.data: false
node.ingest: false
Add the following lines on data-1:
node.master: false
node.data: true
node.ingest: true
Add the following lines on data-2:
node.master: false
node.data: true
node.ingest: true
Add the following lines on data-3:
node.master: false
node.data: true
node.ingest: false
Add the following lines on data-4:
node.master: false
node.data: true
node.ingest: false
Add the following lines on coordinator-1:
node.master: false
node.data: false
node.ingest: false
Change the following on each node:
#network.host: 192.168.0.1
to
network.host: [_local_, _site_]
Change the following on each node:
#discovery.seed_hosts: ["host1", "host2"]
to
discovery.seed_hosts: ["10.0.1.101"]
Change the following on each node:
#cluster.initial_master_nodes: ["node-1", "node-2"]
to
cluster.initial_master_nodes: ["master-1"]
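As a reference point, after all of the edits above, the data-1 node's elasticsearch.yml should contain settings along these lines (node.name, the node.attr.* attributes, and the node role flags vary per node as described above; master-1 and coordinator-1 have no node.attr settings):
cluster.name: linux_academy
node.name: data-1
node.attr.zone: 1
node.attr.temp: hot
node.master: false
node.data: true
node.ingest: true
network.host: [_local_, _site_]
discovery.seed_hosts: ["10.0.1.101"]
cluster.initial_master_nodes: ["master-1"]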
Configure the heap
Open the jvm.options file:
vim /home/elastic/elasticsearch/config/jvm.options
Change the following lines:
-Xms1g
-Xmx1g
to
-Xms2g
-Xmx2g
Start Elasticsearch as a daemon on each node
Switch to the elasticsearch directory:
cd /home/elastic/elasticsearch
Start Elasticsearch as a daemon:
./bin/elasticsearch -d -p pid
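Optionally, once all six nodes have started, confirm they joined the cluster. Security is not yet enabled, so an unauthenticated request from any node works:
curl 'localhost:9200/_cat/nodes?v'
curl 'localhost:9200/_cluster/health?pretty'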
Deploy Kibana
On the coordinator-1 node, download the binaries for Kibana 7.2.1 in the elastic user's home directory:
cd /home/elastic
curl -O https://artifacts.elastic.co/downloads/kibana/kibana-7.2.1-linux-x86_64.tar.gz
Unpack the archive:
tar -xzvf kibana-7.2.1-linux-x86_64.tar.gz
Remove the archive:
rm kibana-7.2.1-linux-x86_64.tar.gz
Rename the unpacked directory:
mv kibana-7.2.1-linux-x86_64 kibana
Configure the kibana.yml file
Open the kibana.yml file:
vim /home/elastic/kibana/config/kibana.yml
Change the following line:
#server.port: 5601
to
server.port: 80
Change the following line:
#server.host: "localhost"
to
server.host: "10.0.1.106"
Start Kibana
Exit as the elastic user:
exit
Become the root user:
sudo su -
Start the Kibana server as root:
/home/elastic/kibana/bin/kibana --allow-root
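Kibana can take a minute or two to start. Optionally, confirm it is listening on port 80 from the coordinator-1 node (using the host and port configured above):
curl -s http://10.0.1.106:80/api/status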
- Secure the Cluster with X-Pack Security.
Generate a Certificate Authority (CA)
Using the Secure Shell (SSH), log in to each node as cloud_user via the public IP address.
Become the elastic user:
sudo su - elastic
Create a certs directory on each node:
mkdir /home/elastic/elasticsearch/config/certs
On the master-1 node, create a CA certificate with the password elastic_ca in the new certs directory:
/home/elastic/elasticsearch/bin/elasticsearch-certutil ca --out config/certs/ca --pass elastic_ca
Generate and deploy a certificate for each node
On the master-1 node, generate each node’s certificate with the CA:
/home/elastic/elasticsearch/bin/elasticsearch-certutil cert --ca config/certs/ca --ca-pass elastic_ca --name master-1 --dns ip-10-1-101.ec2.internal --ip 10.0.1.101 --out config/certs/master-1 --pass elastic_master_1
/home/elastic/elasticsearch/bin/elasticsearch-certutil cert --ca config/certs/ca --ca-pass elastic_ca --name data-1 --dns ip-10-1-102.ec2.internal --ip 10.0.1.102 --out config/certs/data-1 --pass elastic_data_1
/home/elastic/elasticsearch/bin/elasticsearch-certutil cert --ca config/certs/ca --ca-pass elastic_ca --name data-2 --dns ip-10-1-103.ec2.internal --ip 10.0.1.103 --out config/certs/data-2 --pass elastic_data_2
/home/elastic/elasticsearch/bin/elasticsearch-certutil cert --ca config/certs/ca --ca-pass elastic_ca --name data-3 --dns ip-10-1-104.ec2.internal --ip 10.0.1.104 --out config/certs/data-3 --pass elastic_data_3
/home/elastic/elasticsearch/bin/elasticsearch-certutil cert --ca config/certs/ca --ca-pass elastic_ca --name data-4 --dns ip-10-1-105.ec2.internal --ip 10.0.1.105 --out config/certs/data-4 --pass elastic_data_4
/home/elastic/elasticsearch/bin/elasticsearch-certutil cert --ca config/certs/ca --ca-pass elastic_ca --name coordinator-1 --dns ip-10-1-106.ec2.internal --ip 10.0.1.106 --out config/certs/coordinator-1 --pass elastic_coordinator_1
On the master-1 node, remote copy each certificate to the certs directory created on each node:
scp /home/elastic/elasticsearch/config/certs/data-1 10.0.1.102:/home/elastic/elasticsearch/config/certs
scp /home/elastic/elasticsearch/config/certs/data-2 10.0.1.103:/home/elastic/elasticsearch/config/certs
scp /home/elastic/elasticsearch/config/certs/data-3 10.0.1.104:/home/elastic/elasticsearch/config/certs
scp /home/elastic/elasticsearch/config/certs/data-4 10.0.1.105:/home/elastic/elasticsearch/config/certs
scp /home/elastic/elasticsearch/config/certs/coordinator-1 10.0.1.106:/home/elastic/elasticsearch/config/certs
Add the transport keystore password on each node:
echo "CERTIFICATE_PASSWORD_HERE" | /home/elastic/elasticsearch/bin/elasticsearch-keystore add --stdin xpack.security.transport.ssl.keystore.secure_password
Add the transport truststore password on each node:
echo "CERTIFICATE_PASSWORD_HERE" | /home/elastic/elasticsearch/bin/elasticsearch-keystore add --stdin xpack.security.transport.ssl.truststore.secure_password
Add the HTTP keystore password on each node:
echo "CERTIFICATE_PASSWORD_HERE" | /home/elastic/elasticsearch/bin/elasticsearch-keystore add --stdin xpack.security.http.ssl.keystore.secure_password
Add the HTTP truststore password on each node:
echo "CERTIFICATE_PASSWORD_HERE" | /home/elastic/elasticsearch/bin/elasticsearch-keystore add --stdin xpack.security.http.ssl.truststore.secure_password
Configure transport network encryption and restart Elasticsearch
Add the following to /home/elastic/elasticsearch/config/elasticsearch.yml on each node:
#
# ---------------------------------- X-Pack ------------------------------------
#
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: full
xpack.security.transport.ssl.keystore.path: certs/CERTIFICATE_FILE_NAME_HERE
xpack.security.transport.ssl.truststore.path: certs/CERTIFICATE_FILE_NAME_HERE
Stop Elasticsearch:
pkill -F /home/elastic/elasticsearch/pid
Start Elasticsearch as a background daemon and record the PID to a file:
/home/elastic/elasticsearch/bin/elasticsearch -d -p pid
Use the elasticsearch-setup-passwords tool to set the password for each built-in user
Set the built-in user passwords using the elasticsearch-setup-passwords utility on the master-1 node:
/home/elastic/elasticsearch/bin/elasticsearch-setup-passwords interactive
Use the following passwords:
User: elastic                  Password: la_elastic_409
User: apm_system               Password: la_apm_system_409
User: kibana                   Password: la_kibana_409
User: logstash_system          Password: la_logstash_system_409
User: beats_system             Password: la_beats_system_409
User: remote_monitoring_user   Password: la_remote_monitoring_user_409
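HTTP traffic is still unencrypted at this point, so you can optionally verify the new elastic password with a plain HTTP request on any node:
curl -u elastic:la_elastic_409 'localhost:9200/_cluster/health?pretty'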
Configure HTTP network encryption and restart Elasticsearch
Add the following to /home/elastic/elasticsearch/config/elasticsearch.yml:
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/CERTIFICATE_FILE_NAME_HERE
xpack.security.http.ssl.truststore.path: certs/CERTIFICATE_FILE_NAME_HERE
Stop Elasticsearch:
pkill -F /home/elastic/elasticsearch/pid
Start Elasticsearch as a background daemon and record the PID to a file:
/home/elastic/elasticsearch/bin/elasticsearch -d -p pid
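Optionally, once the nodes are back up, confirm that HTTPS is being served. The -k flag skips certificate verification because the CA is self-signed:
curl -k -u elastic:la_elastic_409 'https://localhost:9200/_cat/nodes?v'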
Configure Kibana
Open the kibana.yml file:
vim /home/elastic/kibana/config/kibana.yml
Change the following lines:
elasticsearch.username: "elastic" elasticsearch.password: "yDaYCXL6KYgNligMpSwd"
to
elasticsearch.username: "kibana" elasticsearch.password: "la_kibana_409"
Change the following line:
#elasticsearch.hosts: ["http://localhost:9200"]
to
elasticsearch.hosts: ["https://localhost:9200"]
Change the following line:
#elasticsearch.ssl.verificationMode: full
to
elasticsearch.ssl.verificationMode: none
Restart Kibana
In the console where Kibana is running in the foreground, stop it with Ctrl+C.
Start the Kibana server again as root:
/home/elastic/kibana/bin/kibana --allow-root
- Create the Custom Role and User.
Create the cluster_read role
Use the Kibana console tool to execute the following:
POST /_security/role/cluster_read
{
  "cluster": ["monitor"],
  "indices": [
    {
      "names": ["logs-*"],
      "privileges": ["read", "monitor"]
    }
  ]
}
Create the terry user
Use the Kibana console tool to execute the following:
POST /_security/user/terry
{
  "roles": ["kibana_user", "monitoring_user", "cluster_read"],
  "full_name": "Terry Cox",
  "email": "terry@linuxacademy.com",
  "password": "scaryterry123"
}
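Optionally, confirm both the role and the user were created from the Kibana console:
GET _security/role/cluster_read
GET _security/user/terry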
- Create the “logs” Index Template.
Use the Kibana console tool to execute the following:
PUT _template/logs
{
  "index_patterns": ["logs-*"],
  "aliases": {
    "logs": {}
  },
  "mappings": {
    "dynamic_templates": [
      {
        "strings_as_keywords": {
          "match_mapping_type": "string",
          "mapping": {
            "type": "keyword"
          }
        }
      }
    ],
    "properties": {
      "referrer": {
        "type": "join",
        "relations": {
          "referred_to": "referred_by"
        }
      },
      "body": {
        "type": "text",
        "fields": {
          "html": {
            "type": "text",
            "analyzer": "html"
          }
        }
      },
      "url": {
        "type": "text",
        "analyzer": "simple",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "geoip": {
        "properties": {
          "coordinates": {
            "type": "geo_point"
          }
        }
      },
      "client_ip": {
        "type": "ip"
      },
      "related_content": {
        "type": "nested"
      },
      "useragent": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      }
    }
  },
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1,
    "index.routing.allocation.require.temp": "hot",
    "analysis": {
      "analyzer": {
        "html": {
          "type": "custom",
          "tokenizer": "standard",
          "char_filter": "html_strip",
          "filter": "lowercase"
        }
      }
    }
  }
}
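Optionally, confirm the template was stored as intended by retrieving it:
GET _template/logs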
- Create the “logs” Indexes.
Create the logs-2018-10-01 Index
Use the Kibana console tool to execute the following:
PUT logs-2018-10-01

PUT logs-2018-10-01/_settings
{
  "index.routing.allocation.require.temp": "warm"
}
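After the settings update, the shards of logs-2018-10-01 should relocate to the warm nodes (data-3 and data-4). Optionally, watch the allocation with:
GET _cat/shards/logs-2018-10-01?v&h=index,shard,prirep,state,node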
Create the logs-2018-10-02 Index
Use the Kibana console tool to execute the following:
PUT logs-2018-10-02/_doc/0
{
  "url": "https://linuxacademy.com/courses/elastic-certified-engineer",
  "response_code": "200",
  "bytes": 16384,
  "client_ip": "10.0.1.100",
  "geoip.coordinates": "32.9259,97.2531",
  "useragent": "Mozilla/5.0 (Macintosh; Intel Mac OS X x.y; rv:42.0) Gecko/20100101 Firefox/42.0",
  "method": "GET",
  "request_time": 84,
  "body": "<body><h1>Elastic Certified Engineer</h1></body>",
  "referrer": {
    "name": "referred_to"
  }
}
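Indexing this document creates logs-2018-10-02 from the template, so you can optionally verify that the dynamic template and multi-fields were applied (for example, response_code should be mapped as keyword and geoip.coordinates as geo_point):
GET logs-2018-10-02/_mapping
GET logs-2018-10-02/_doc/0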
- Configure Shard Allocation Awareness.
Use the Kibana console tool to execute the following:
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "zone",
    "cluster.routing.allocation.awareness.force.zone.values": "1,2"
  }
}
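Optionally, confirm the persistent settings were applied:
GET _cluster/settings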