In this hands-on lab scenario, you are a data engineer for Awesome Company. Recently, the company has implemented a number of Azure Data Factory pipelines to move and transform data for a variety of purposes. As usage grows, it is your duty to monitor for failed jobs and unexpected resource consumption. Performing the actions of this hands-on lab will help you become familiar with monitoring an Azure Data Factory.
Learning Objectives
Successfully complete this lab by achieving the following learning objectives:
- Create a Pipeline Trigger
  - Set a schedule trigger on the ProductArchivePipeline, using your own time zone, and have it run at a short interval, such as every three minutes (see the first sketch after this list).
- Stream Logs to a Storage Account
  - Create a storage account, then configure the data factory's diagnostic settings to send its platform logs and metrics to that account (second sketch below).
- Create Pipeline Alerts
  - Create two alerts: one for successful pipeline runs and one for failed pipeline runs (third sketch below).
  - After confirming successful runs, introduce a change that causes the pipeline to fail, then verify that both alerts fire as expected.
- View Real-Time Performance Data
  - Use the Azure portal to view real-time CPU and memory performance (the final sketch below shows how to pull the same metrics programmatically).
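
The lab itself is performed in the Azure portal, but each objective has a programmatic equivalent. First, the pipeline trigger: below is a minimal sketch using the `azure-mgmt-datafactory` Python SDK. The subscription ID, resource group, factory name, and trigger name are placeholders; substitute whatever your lab environment provides.

```python
from datetime import datetime, timedelta

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    PipelineReference,
    ScheduleTrigger,
    ScheduleTriggerRecurrence,
    TriggerPipelineReference,
    TriggerResource,
)

# Placeholder names -- substitute the values from your lab environment.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "awesome-company-rg"
FACTORY_NAME = "awesome-company-adf"

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# A schedule trigger that fires every three minutes, starting one minute from now.
trigger = TriggerResource(
    properties=ScheduleTrigger(
        recurrence=ScheduleTriggerRecurrence(
            frequency="Minute",
            interval=3,
            start_time=datetime.utcnow() + timedelta(minutes=1),
            time_zone="UTC",  # set your own time zone, e.g. "Central Standard Time"
        ),
        pipelines=[
            TriggerPipelineReference(
                pipeline_reference=PipelineReference(
                    reference_name="ProductArchivePipeline"
                )
            )
        ],
    )
)

client.triggers.create_or_update(
    RESOURCE_GROUP, FACTORY_NAME, "EveryThreeMinutes", trigger
)
# Triggers are created stopped; start it so pipeline runs actually begin.
client.triggers.begin_start(RESOURCE_GROUP, FACTORY_NAME, "EveryThreeMinutes").result()
```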
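
Streaming logs to storage is configured through the factory's diagnostic settings. Here is a sketch of both steps using the `azure-mgmt-storage` and `azure-mgmt-monitor` SDKs, again with placeholder names; the log categories shown (`PipelineRuns`, `TriggerRuns`, `ActivityRuns`) are the standard Data Factory diagnostic categories.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    DiagnosticSettingsResource,
    LogSettings,
    MetricSettings,
)
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import Sku, StorageAccountCreateParameters

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "awesome-company-rg"
FACTORY_NAME = "awesome-company-adf"
STORAGE_ACCOUNT = "awesomecompanylogs"  # must be globally unique and lowercase

credential = DefaultAzureCredential()

# 1. Create the storage account that will receive the logs.
storage = StorageManagementClient(credential, SUBSCRIPTION_ID)
account = storage.storage_accounts.begin_create(
    RESOURCE_GROUP,
    STORAGE_ACCOUNT,
    StorageAccountCreateParameters(
        sku=Sku(name="Standard_LRS"), kind="StorageV2", location="eastus"
    ),
).result()

# 2. Point the data factory's diagnostic settings at it.
factory_id = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.DataFactory/factories/{FACTORY_NAME}"
)
monitor = MonitorManagementClient(credential, SUBSCRIPTION_ID)
monitor.diagnostic_settings.create_or_update(
    factory_id,
    "stream-to-storage",
    DiagnosticSettingsResource(
        storage_account_id=account.id,
        logs=[
            LogSettings(category="PipelineRuns", enabled=True),
            LogSettings(category="TriggerRuns", enabled=True),
            LogSettings(category="ActivityRuns", enabled=True),
        ],
        metrics=[MetricSettings(category="AllMetrics", enabled=True)],
    ),
)
```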
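
Both pipeline alerts are standard Azure Monitor metric alerts scoped to the factory, built on its `PipelineSucceededRuns` and `PipelineFailedRuns` metrics. A sketch follows (placeholder names again; in the lab you would also attach an action group so the alerts actually notify someone):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    MetricAlertResource,
    MetricAlertSingleResourceMultipleMetricCriteria,
    MetricCriteria,
)

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "awesome-company-rg"
FACTORY_ID = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
    "/providers/Microsoft.DataFactory/factories/awesome-company-adf"
)

monitor = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)


def pipeline_run_alert(alert_name: str, metric_name: str) -> None:
    """Fire whenever the given run-count metric reaches 1 in a 5-minute window."""
    monitor.metric_alerts.create_or_update(
        RESOURCE_GROUP,
        alert_name,
        MetricAlertResource(
            location="global",  # metric alert rules are global resources
            severity=3,
            enabled=True,
            scopes=[FACTORY_ID],
            evaluation_frequency=timedelta(minutes=1),
            window_size=timedelta(minutes=5),
            criteria=MetricAlertSingleResourceMultipleMetricCriteria(
                all_of=[
                    MetricCriteria(
                        name="runs",
                        metric_name=metric_name,
                        operator="GreaterThanOrEqual",
                        threshold=1,
                        time_aggregation="Total",
                    )
                ]
            ),
            # To receive notifications, also pass actions=[MetricAlertAction(...)]
            # referencing an action group.
        ),
    )


pipeline_run_alert("SucceededPipelineRuns", "PipelineSucceededRuns")
pipeline_run_alert("FailedPipelineRuns", "PipelineFailedRuns")
```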
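
Finally, the portal's real-time charts are backed by Azure Monitor metrics, so the same CPU and memory figures can be pulled programmatically. This sketch assumes the factory uses an integration runtime that emits the `IntegrationRuntimeCpuPercentage` and `IntegrationRuntimeAvailableMemory` metrics:

```python
from datetime import datetime, timedelta

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
FACTORY_ID = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/awesome-company-rg"
    "/providers/Microsoft.DataFactory/factories/awesome-company-adf"
)

monitor = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Last 30 minutes at one-minute granularity -- roughly what the portal chart shows.
end = datetime.utcnow()
timespan = f"{(end - timedelta(minutes=30)).isoformat()}/{end.isoformat()}"

response = monitor.metrics.list(
    FACTORY_ID,
    timespan=timespan,
    interval=timedelta(minutes=1),
    metricnames="IntegrationRuntimeCpuPercentage,IntegrationRuntimeAvailableMemory",
    aggregation="Average",
)
for metric in response.value:
    print(metric.name.localized_value)
    for series in metric.timeseries:
        for point in series.data:
            print(f"  {point.time_stamp}  avg={point.average}")
```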