ECE Practice Exam — Part 2

4 hours
  • 8 Learning Objectives

About this Hands-on Lab

In Part 2 of the Elastic Certified Engineer practice exam, you will be tested on the following exam objectives:

* Perform index, create, read, update, and delete operations on the documents of an index
* Use the Reindex API and Update By Query API to reindex and/or update documents
* Define and use an ingest pipeline that satisfies a given set of requirements, including the use of Painless to modify documents
* Diagnose shard issues and repair a cluster's health
* Write and execute a search query for terms and/or phrases in one or more fields of an index
* Write and execute a search query that is a Boolean combination of multiple queries and filters
* Highlight the search terms in the response of a query
* Sort the results of a query by a given set of requirements
* Implement pagination in the results of a search query
* Apply fuzzy matching to a query
* Define and use a search template
* Write and execute a query that searches multiple clusters
* Write and execute metric and bucket aggregations
* Write and execute aggregations that contain sub-aggregations
* Write and execute pipeline aggregations
* Back up and restore a cluster and/or specific indices
* Configure a cluster for cross-cluster search

Learning Objectives

Successfully complete this lab by achieving the following learning objectives:

Diagnose and Repair the “c1” cluster.

Start Elasticsearch

Using the Secure Shell (SSH), log in to the c1-data-1 node as cloud_user via the public IP address.

Become the elastic user:

sudo su - elastic

Start Elasticsearch as a background daemon and record the PID to a file:

/home/elastic/elasticsearch/bin/elasticsearch -d -p pid
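
Once the node starts, you can optionally confirm that it has rejoined the cluster by using the Kibana console tool on the c1 cluster (a standard cluster API, shown here only as a sanity check):

GET _cat/nodes?v

The c1-data-1 node should appear in the node list.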

Replicate the logs index

Use the Kibana console tool on the c1 cluster to execute the following:

PUT logs/_settings
{
  "number_of_replicas": 1
}

Reduce the shakespeare index’s replication

Use the Kibana console tool on the c1 cluster to execute the following:

PUT shakespeare/_settings
{
  "number_of_replicas": 1
}

Remove allocation filtering for the bank index

Use the Kibana console tool on the c1 cluster to execute the following:

PUT bank/_settings
{
  "index.routing.allocation.require._name": null
}
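
After applying the three settings changes above, you can optionally confirm that all shards are assigned and the cluster has returned to green:

GET _cluster/health
GET _cat/shards?v
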
Transfer the “bank” index to the “c2” cluster.

Configure the c2 cluster to remote reindex from the c1 cluster

Using the Secure Shell (SSH), log in to the c2 cluster nodes as cloud_user via the public IP address.

Become the elastic user:

sudo su - elastic

Add the following line to /home/elastic/elasticsearch/config/elasticsearch.yml:

reindex.remote.whitelist: "10.0.1.101:9200, 10.0.1.102:9200, 10.0.1.103:9200, 10.0.1.104:9200"

Stop Elasticsearch:

pkill -F /home/elastic/elasticsearch/pid

Start Elasticsearch as a background daemon and record the PID to a file:

/home/elastic/elasticsearch/bin/elasticsearch -d -p pid

Create the bank index on the c2 cluster

Use the Kibana console tool on the c2 cluster to execute the following:

PUT bank
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}

Reindex the bank index on the c2 cluster

Use the Kibana console tool on the c2 cluster to execute the following:

POST _reindex
{
  "source": {
    "remote": {
      "host": "http://10.0.1.101:9200",
      "username": "elastic",
      "password": "la_elastic_409"
    },
    "index": "bank"
  },
  "dest": {
    "index": "bank"
  }
}
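
Optionally, before deleting the source index, compare document counts by running the following on both clusters:

GET bank/_count

The counts on c1 and c2 should match before you proceed.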

Delete the bank index on the c1 cluster

Use the Kibana console tool on the c1 cluster to execute the following:

DELETE bank
Back up the “bank” index on the “c2” cluster.

Configure the nodes

Using the Secure Shell (SSH), log in to the c2-master-1 node as cloud_user via the public IP address.

Become the elastic user:

sudo su - elastic

Create the repo directory:

mkdir /home/elastic/snapshots

Add the following line to /home/elastic/elasticsearch/config/elasticsearch.yml:

path.repo: "/home/elastic/snapshots"

Stop Elasticsearch:

pkill -F /home/elastic/elasticsearch/pid

Start Elasticsearch as a background daemon and record the PID to a file:

/home/elastic/elasticsearch/bin/elasticsearch -d -p pid

Create the local_repo repository

Use the Kibana console tool on the c2 cluster to execute the following:

PUT _snapshot/local_repo
{
  "type": "fs",
  "settings": {
    "location": "/home/elastic/snapshots"
  }
}

Back up the bank index

Use the Kibana console tool on the c2 cluster to execute the following:

PUT _snapshot/local_repo/bank_1?wait_for_completion=true
{
  "indices": "bank", 
  "include_global_state": true
}
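
Optionally, verify the snapshot with the Kibana console tool on the c2 cluster:

GET _snapshot/local_repo/bank_1

The snapshot should report a state of SUCCESS.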
Configure Cross-Cluster Search.

Use the Kibana console tool on the c1 cluster to execute the following:

PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "c2": {
          "seeds": [
            "10.0.1.105:9300"
          ]
        }
      }
    }
  }
}
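
Optionally, verify the remote connection with the Kibana console tool on the c1 cluster:

GET _remote/info

The response should list the c2 remote with "connected": true.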
Create, Update, and Delete Documents.

Delete the bank documents

Use the Kibana console tool on the c2 cluster to execute the following:

DELETE bank/_doc/5
DELETE bank/_doc/27
DELETE bank/_doc/819

Update the bank document

Use the Kibana console tool on the c2 cluster to execute the following:

POST bank/_update/67
{
  "doc": {
    "lastname": "Alonso"
  }
}

Create the bank document

Use the Kibana console tool on the c2 cluster to execute the following:

PUT bank/_doc/1000
{
  "account_number": 1000,
  "balance": 35550,
  "firstname": "Stosh",
  "lastname": "Pearson",
  "age": 45,
  "gender": "M",
  "address": "125 Bear Creek Pkwy",
  "employer": "Linux Academy",
  "email": "s[email protected]",
  "city": "Keller",
  "state": "TX"
}
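
Optionally, verify the changes with the Kibana console tool on the c2 cluster:

GET bank/_doc/1000
GET bank/_doc/5

The first request should return the new account; the second should return "found": false, since document 5 was deleted.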

Update the shakespeare mapping

Use the Kibana console tool on the c1 cluster to execute the following:

PUT shakespeare/_mappings
{
  "properties": {
    "line_id": {
      "type": "integer"
    },
    "line_number": {
      "type": "text",
      "fields": {
        "keyword": {
          "type": "keyword",
          "ignore_above": 256
        }
      }
    },
    "play_name": {
      "type": "keyword"
    },
    "speaker": {
      "type": "keyword"
    },
    "speech_number": {
      "type": "integer"
    },
    "text_entry": {
      "type": "text",
      "fields": {
        "english": {
          "type": "text",
          "analyzer": "english"
        },
        "keyword": {
          "type": "keyword",
          "ignore_above": 256
        }
      }
    },
    "type": {
      "type": "text",
      "fields": {
        "keyword": {
          "type": "keyword",
          "ignore_above": 256
        }
      }
    }
  }
}

Delete and update the shakespeare documents

Use the Kibana console tool on the c1 cluster to execute the following:

POST shakespeare/_update_by_query
{
  "script": {
    "lang": "painless",
    "source": """
      if (ctx._source.line_number == "") {
        ctx.op = "delete"
      }
    """
  }
}
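
Optionally, you can count the documents that will be deleted before running the query above. This assumes the line_number.keyword sub-field from the mapping defined earlier:

GET shakespeare/_count
{
  "query": {
    "term": {
      "line_number.keyword": ""
    }
  }
}
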

Create the ingest pipeline

Use the Kibana console tool on the c1 cluster to execute the following:

PUT _ingest/pipeline/fix_logs
{
  "processors": [
    {
      "remove": {
        "field": "@message"
      }
    },
    {
      "split": {
        "field": "spaces",
        "separator": "\s+"
      }
    },
    {
      "script": {
        "lang": "painless",
        "source": "ctx.relatedContent_count = ctx.relatedContent.length"
      }
    },
    {
      "uppercase": {
        "field": "extension"
      }
    }
  ]
}
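
Optionally, before reindexing, you can test the pipeline with the simulate API. The sample document below is hypothetical, with field values chosen only to exercise each processor:

POST _ingest/pipeline/fix_logs/_simulate
{
  "docs": [
    {
      "_source": {
        "@message": "raw message to be removed",
        "spaces": "one two  three",
        "relatedContent": [{}, {}],
        "extension": "html"
      }
    }
  ]
}

The response should show @message removed, spaces split into ["one", "two", "three"], a relatedContent_count of 2, and extension uppercased to "HTML".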

Create the logs_new index

Use the Kibana console tool on the c1 cluster to execute the following:

PUT logs_new
{
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  }
}

Reindex the logs documents

Use the Kibana console tool on the c1 cluster to execute the following:

POST _reindex
{
  "source": {
    "index": "logs"
  },
  "dest": {
    "index": "logs_new",
    "pipeline": "fix_logs"
  }
}
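
The pipeline only modifies fields and does not drop documents, so logs_new should end up with the same number of documents as logs. Optionally, verify with:

GET logs/_count
GET logs_new/_count
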
Search Documents.

Search the bank index

Use the Kibana console tool on the c1 cluster to execute the following:

GET c2:bank/_search
{
  "from": 0,
  "size": 50,
  "sort": [
    {
      "age": {
        "order": "asc"
      }
    },
    {
      "balance": {
        "order": "desc"
      }
    },
    {
      "lastname.keyword": {
        "order": "asc"
      }
    }
  ], 
  "query": {
    "bool": {
      "must": [
        {
          "term": {
            "gender.keyword": {
              "value": "F"
            }
          }
        },
        {
          "range": {
            "balance": {
              "gt": 10000
            }
          }
        }
      ],
      "must_not": [
        {
          "terms": {
            "state.keyword": ["PA", "VA", "IL"]
          }
        }
      ],
      "filter": {
        "range": {
          "age": {
            "gte": 18,
            "lte": 35
          }
        }
      }
    }
  }
}

Search the shakespeare index

Use the Kibana console tool on the c1 cluster to execute the following:

GET shakespeare/_search
{
  "from": 0,
  "size": 20, 
  "highlight": {
    "pre_tags": "<b>",
    "post_tags": "</b>",
    "fields": {
      "text_entry.english": {}
    }
  },
  "query": {
    "bool": {
      "should": [
        {
          "match": {
            "text_entry.english": "life"
          }
        },
        {
          "match": {
            "text_entry.english": "love"
          }
        },
        {
          "match": {
            "text_entry.english": "death"
          }
        }
      ],
      "minimum_should_match": 2
    }
  }
}

Search the logs index

Use the Kibana console tool on the c1 cluster to execute the following:

GET logs/_search
{
  "highlight": {
    "fields": {
      "relatedContent.twitter:description": {},
      "relatedContent.twitter:title": {}
    }
  },
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "relatedContent.twitter:description": {
              "query": "never",
              "fuzziness": 2
            }
          }
        },
        {
          "match_phrase": {
            "relatedContent.twitter:title": "Golden State"
          }
        }
      ]
    }
  }
}
Aggregate Documents.

Aggregate on the bank index

Use the Kibana console tool on the c1 cluster to execute the following:

GET c2:bank/_search
{
  "size": 0, 
  "aggs": {
    "state": {
      "terms": {
        "field": "state.keyword",
        "size": 5,
        "order": {
          "avg_balance": "desc"
        }
      },
      "aggs": {
        "avg_balance": {
          "avg": {
            "field": "balance"
          }
        }
      }
    }
  },
  "query": {
    "range": {
      "age": {
        "gte": 30
      }
    }
  }
}

Aggregate on the shakespeare index

Use the Kibana console tool on the c1 cluster to execute the following:

GET shakespeare/_search
{
  "size": 0, 
  "aggs": {
    "plays": {
      "terms": {
        "field": "play_name",
        "size": 10
      },
      "aggs": {
        "speakers": {
          "cardinality": {
            "field": "speaker"
          }
        }
      }
    },
    "most_parts": {
      "max_bucket": {
        "buckets_path": "plays>speakers"
      }
    }
  }
}

Aggregate on the logs index

Use the Kibana console tool on the c1 cluster to execute the following:

GET logs/_search
{
  "size": 0,
  "aggs": {
    "hour": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "hour"
      },
      "aggs": {
        "clients": {
          "cardinality": {
            "field": "clientip.keyword"
          }
        },
        "cumulative_clients": {
          "cumulative_sum": {
            "buckets_path": "clients"
          }
        },
        "clients_per_minute": {
          "derivative": {
            "buckets_path": "cumulative_clients",
            "unit": "1m"
          }
        }
      }
    },
    "peak": {
      "max_bucket": {
        "buckets_path": "hour>clients"
      }
    }
  },
  "query": {
    "range": {
      "@timestamp": {
        "gte": "2015-05-19",
        "lt": "2015-05-20",
        "format": "yyyy-MM-dd"
      }
    }
  }
}
Create the Search Template.

Use the Kibana console tool on the c2 cluster to execute the following:

POST _scripts/accounts_search
{
  "script": {
    "lang": "mustache",
    "source": {
      "from": "{{from}}{{^from}}0{{/from}}",
      "size": "{{size}}{{^size}}25{{/size}}",
      "query": {
        "bool": {
          "must": [
            {
              "wildcard": {
                "firstname.keyword": "{{first_name}}{{^first_name}}*{{/first_name}}"
              }
            },
            {
              "wildcard": {
                "lastname.keyword": "{{last_name}}{{^last_name}}*{{/last_name}}"
              }
            }
          ]
        }
      }
    }
  }
}
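
Optionally, you can preview how the template renders its defaults and then execute it. Use the Kibana console tool on the c2 cluster (the parameter values below are hypothetical):

POST _render/template
{
  "id": "accounts_search",
  "params": {
    "last_name": "P*"
  }
}

GET bank/_search/template
{
  "id": "accounts_search",
  "params": {
    "size": 10,
    "last_name": "Pearson"
  }
}

With no from or size supplied, the rendered query should fall back to the defaults of 0 and 25.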

Additional Resources

Diagnose and repair the c1 cluster

Use Elasticsearch APIs to diagnose and repair all issues on the c1 cluster. All indices should be green, and any index that was previously red should be configured so that, if the same failure occurs again, it degrades only to a yellow state.

Transfer the bank index to the c2 cluster

The bank index on the c1 cluster needs to be segmented for additional security. Transfer all documents to an index of the same name on the c2 cluster with a green state. Once all documents have been transferred to the c2 cluster, the bank index should be removed from the c1 cluster.

Back Up the bank index on the c2 cluster

Configure the c2 cluster to back up to the local_repo at /home/elastic/snapshots. Then, back up the bank index with the cluster state to a snapshot called bank_1 to the local_repo repository.

Configure Cross-Cluster Search

Configure the c1 cluster so that it can search documents in the c2 cluster.

Create, Update, and Delete Documents

For the bank index, perform the following:

  • Delete the documents where account_number is 5, 27, and 819
  • Update the lastname to "Alonso" for the document where account_number is 67
  • Create the following account:
    • account number: 1000
    • balance: 35550
    • first name: "Stosh"
    • last name: "Pearson"
    • age: 45
    • gender: "M"
    • address: "125 Bear Creek Pkwy"
    • employer: "Linux Academy"
    • email: "[email protected]"
    • city: "Keller"
    • state: "TX"

For the shakespeare index, perform the following:

  • Delete all documents where line_number is equal to ""
  • Add the text_entry.english multi-field that uses the english analyzer
  • Update every document in the index to pick up the new multi-field mapping

On the c1 cluster, create a pipeline called fix_logs, and use it to reindex the logs index to logs_new with the following changes:

  • Delete the @message field
  • The spaces field should be converted to an array of terms separated by whitespace
  • Create a relatedContent_count field that is equal to the number of objects in the relatedContent object array
  • Capitalize the values of the extension field

Search Documents

Create and execute a search request on the c1 cluster for the bank index that meets the following requirements:

  • gender must be female
  • balance must be greater than but not equal to 10000
  • state must not be "PA", "VA", or "IL"
  • age must be between 18 and 35, inclusive, without affecting the relevancy score
  • Return the first page of results with 50 results per page
  • Results should be ordered first by age ascending, then by balance descending, and lastly by lastname ascending

Create and execute a search request on the c1 cluster for the shakespeare index that meets the following requirements:

  • At least two of the following words should match for the english-analyzed text_entry field: "life", "love", and "death"
  • The search results should highlight the matched words with HTML bold tags (e.g., <b>text</b>) from the english-analyzed text_entry field
  • Return the first page of results with 20 results per page

Create and execute a search request on the c1 cluster for the logs index that meets the following requirements:

  • Match the word "never" in the field relatedContent.twitter:description within two character changes
  • Match the phrase "Golden State" in the field relatedContent.twitter:title
  • The search results should highlight the matched words from the fields relatedContent.twitter:description and relatedContent.twitter:title

Aggregate Documents

Create a single search request on the c1 cluster for the bank index that answers the following question: What is the average account balance per state for the top 5 states with the most account holders who are 30 years or older (listed in descending order by average balance)? The search results should not return any documents.

Create a single search request on the c1 cluster for the shakespeare index that answers the following question: How many parts are in each of the top 10 Shakespeare plays (ordered by the number of lines in each play), and which play in the top 10 has the most parts? The search results should not return any documents.

Create a single search request on the c1 cluster for the logs index that answers the following question: On May 19, 2015, what was the average number of unique clients per minute for each hour of the day (in chronological order), and which hour had the most clients? The search results should not return any documents.

Create a search template

Create the search template accounts_search on the c2 cluster for the bank index as follows:

  • The current page of results should be abstracted with the from parameter and a default value of 0
  • The number of results per page should be abstracted with the size parameter and a default value of 25
  • The first and last name of the account holder should be searchable using the first_name and last_name parameters.
  • All documents should be matched by default for the first and last name queries.

What Are Hands-on Labs?

Hands-on Labs are real environments created by industry experts to help you learn. These environments help you gain knowledge and experience, practice without compromising your system, test without risk, destroy without fear, and let you learn from your mistakes. Hands-on Labs: practice your skills before delivering in the real world.
