Categorizing Uploaded Data Using AWS Step Functions

1.5 hours
  • 4 Learning Objectives

About this Hands-on Lab

AWS Step Functions helps you manage the flow of information through a pipeline of steps. This includes calling services such as Lambda, Glue, Athena, and DynamoDB, as well as making basic decisions and waiting for work to complete. Step Functions lets you pass state information between steps and act on that state. In this lab, we’ll build a serverless pipeline that transcribes uploaded audio to text and sorts the data based on keywords in the transcript.
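For example, each Lambda task in a Step Functions pipeline receives the accumulated state as its input event and returns the (possibly augmented) state as its output, which becomes the input of the next step. A minimal sketch of that pattern; the key names here are illustrative, not the lab's actual schema:

```python
# Sketch of how state flows through a Step Functions Lambda task.
# The handler receives the accumulated state as `event`, adds to it,
# and returns it; Step Functions passes the output to the next step.
# Key names (file_key, job_name) are illustrative, not the lab's schema.

def lambda_handler(event, context):
    state = dict(event)  # copy the incoming state
    # derive a value from the state and add it for later steps
    state["job_name"] = "transcribe-" + state["file_key"].replace("/", "-")
    return state         # becomes the input of the next step
```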

Prerequisites

This is an advanced lab designed to challenge you, but it will reward you with valuable experience across several parts of AWS. While solution code and a full walkthrough are available, you should attempt to solve the challenges on your own. This lab focuses on helping you learn Step Functions, so some resources and the more complex logic code are already provided in the lab.

To get the most out of this lab, you should be familiar with the following:
* IAM roles
* S3 buckets
* S3 events
* Lambda functions
* Python

Learning Objectives

Successfully complete this lab by achieving the following learning objectives:

Prepare to Launch the Step Function

While some resources have been provided, you will need to complete the following steps to finish configuring the environment.

  1. Create an IAM role to allow Step Functions to start Lambda functions.
  2. Create a Step Function. Use a default configuration for now; it will be properly set up later. Note the Step Function's ARN, as you will need it in the next step.
  3. In the run-step-function-lambda, set the STATEMACHINEARN environment variable to the ARN of the Step Function you just created.
  4. Create an S3 Event Notification that will call the run-step-function-lambda.
  5. Restrict the notification to .mp3 files created in the upload folder.
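As a sketch of how steps 3–5 fit together, here is a hypothetical version of an S3-triggered Lambda that starts the state machine. The provided run-step-function-lambda is already complete, so this only illustrates the mechanism; the STATEMACHINEARN environment variable is the one you set above, while the helper name and input keys are assumptions:

```python
# Hypothetical sketch of an S3-triggered Lambda that starts a
# Step Functions execution. STATEMACHINEARN is the environment
# variable set in step 3; everything else is illustrative.
import json
import os


def build_execution_input(record):
    """Extract the bucket and object key from one S3 event record."""
    return {
        "bucket": record["s3"]["bucket"]["name"],
        "file_key": record["s3"]["object"]["key"],
    }


def lambda_handler(event, context):
    import boto3  # imported lazily so the module also loads outside Lambda
    sfn = boto3.client("stepfunctions")
    # an S3 event notification can batch multiple records
    for record in event["Records"]:
        sfn.start_execution(
            stateMachineArn=os.environ["STATEMACHINEARN"],
            input=json.dumps(build_execution_input(record)),
        )
```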
Create the Step Function Flow

Implement the following logic as a Step Function flow. Refer to the lab instructions to find resources already provided that will do some of these things for you. You can also reference the lab diagram for an example architecture.

  1. Create a Transcribe job to translate the audio uploaded to S3 into text.
  2. The Transcribe job can take a few minutes to run, so wait for it to complete.
  3. After 30 seconds, check the status of the Transcribe job to see if it has completed.
  4. If the Transcribe job has not completed, wait another 30 seconds for it to complete.
  5. If the Transcribe job has failed, stop the pipeline with an error.
  6. Once the Transcribe job has completed successfully, move the audio and transcript into categories based on the presence of keywords in the transcript.
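The six steps above map onto a small Amazon States Language definition: a Task, a Wait, a second Task, and a Choice that loops back, fails, or proceeds. One possible sketch, written as a Python dict; the state names, the `$.TranscriptionJobStatus` path, and the Lambda ARNs are placeholders, and your lab's provided configuration may differ:

```python
# Sketch of the lab's flow in Amazon States Language, as a Python dict.
# ARNs, state names, and the status path are placeholders.
import json

TRANSCRIBE_AUDIO_ARN = "arn:aws:lambda:us-east-1:123456789012:function:transcribe-audio-lambda"
TRANSCRIBE_STATUS_ARN = "arn:aws:lambda:us-east-1:123456789012:function:transcribe-status-lambda"
CATEGORIZE_DATA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:categorize-data-lambda"

definition = {
    "Comment": "Transcribe uploaded audio, poll for completion, then categorize.",
    "StartAt": "TranscribeAudio",
    "States": {
        "TranscribeAudio": {           # step 1: start the Transcribe job
            "Type": "Task",
            "Resource": TRANSCRIBE_AUDIO_ARN,
            "Next": "WaitForTranscribe",
        },
        "WaitForTranscribe": {         # steps 2 and 4: wait 30 seconds
            "Type": "Wait",
            "Seconds": 30,
            "Next": "CheckTranscribeStatus",
        },
        "CheckTranscribeStatus": {     # step 3: check the job's status
            "Type": "Task",
            "Resource": TRANSCRIBE_STATUS_ARN,
            "Next": "TranscribeComplete?",
        },
        "TranscribeComplete?": {       # branch on the status in the state
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.TranscriptionJobStatus",
                 "StringEquals": "COMPLETED", "Next": "CategorizeData"},
                {"Variable": "$.TranscriptionJobStatus",
                 "StringEquals": "FAILED", "Next": "TranscribeFailed"},
            ],
            "Default": "WaitForTranscribe",  # step 4: not done yet, wait again
        },
        "TranscribeFailed": {          # step 5: stop the pipeline with an error
            "Type": "Fail",
            "Error": "TranscribeFailed",
            "Cause": "The Transcribe job did not complete.",
        },
        "CategorizeData": {            # step 6: move audio and transcript
            "Type": "Task",
            "Resource": CATEGORIZE_DATA_ARN,
            "End": True,
        },
    },
}

print(json.dumps(definition, indent=2))
```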
Create the Lambda Business Logic

The Lambda functions that are used in the Step Function flow have been partially written, but there is a lot still to do. Complete all of the TODO lines.

These functions are using the IAM role transcribe-audio-lambda-role, which gives them access to S3, Transcribe, and CloudWatch.

  1. Finish writing transcribe-audio-lambda.
    • Get the state from the function’s input.
    • Construct the parameters needed for the Transcribe job.
    • Add the transcript’s file key and the Transcribe job name into the state.
  2. Finish writing transcribe-status-lambda.
    • Retrieve the Transcribe job name from the state.
    • Add the job’s status into the state.
  3. Finish writing categorize-data-lambda.
    • Retrieve the state information. You’ve seen it done twice now; do you remember how?
    • Extract pieces of the state for further processing.
    • Determine the output location. Read the preceding code blocks.
    • Move the files to their proper destinations.
    • Note that this function relies on the environment variable KEYWORDS, which is a comma-separated list of words that are sensitive. If you are using the provided audio file, you can set this to important if you want the audio to match, or boring if you don’t want it to match.
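As one possible sketch of the keyword check described in the last bullet: only the KEYWORDS environment variable and the folder names come from the lab; the helper names are hypothetical, not from the provided stub.

```python
# Sketch of the keyword decision at the heart of categorize-data-lambda.
# Folder names match the scenario ("processed" vs "important"); the
# helper names choose_prefix / keywords_from_env are hypothetical.
import os


def choose_prefix(transcript_text, keywords):
    """Return the destination folder based on keyword presence."""
    text = transcript_text.lower()
    words = [w.strip().lower() for w in keywords if w.strip()]
    if any(w in text for w in words):
        return "important/"
    return "processed/"


def keywords_from_env():
    # KEYWORDS is a comma-separated list, e.g. "important" or "boring"
    return os.environ.get("KEYWORDS", "").split(",")
```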

Categorize Audio Data

With the previous steps complete, our serverless app is ready to do work! Let’s test it out.

  1. Create an upload folder in the provided S3 bucket.
  2. Upload a test audio file to the upload folder.
  3. View the Input and Output of the steps in the Step Function as the state machine executes.
  4. Once your Step Function has completed, view the upload folder of the S3 bucket to confirm the file is no longer there.
  5. View the folder you categorized your audio into.

Additional Resources

Scenario

Our company has recently started recording meetings to improve transparency throughout the organization. With the number of meetings we have (always far too many), we generate many hours of audio every day. It's nice to have these recordings, but it'd be far more useful to search through them as text rather than manually listening back through the data. This sounds like a job for machine learning! The one hiccup is that since every meeting is recorded, meetings about secret projects are also being recorded. Those need to be moved off to the side for stricter access control.

Our task is to build a pipeline that can take in all of this audio and translate it to text. We then need to check the transcript to see if it contains any of the secret project names. Normal audio gets put into the standard "processed" folder, but audio containing secret projects needs to be put in the "important" folder.

Resources

The full code for the Lambda functions and a test audio file can be found on GitHub. The Step Function configuration is also provided, but the Lambda resource ARNs will need to be set for your environment.

Some AWS resources have been provided with this lab:

  1. Two IAM roles
    • transcribe-audio-lambda-role, which is used by the Lambda functions to interact with Transcribe
    • trigger-stp-functions-role, which is used by a Lambda function to start the Step Functions workflow.
  2. An S3 bucket named meeting-audio with some characters for uniqueness
  3. One Lambda function that will handle S3 events. This code is fully provided in the lab, but it will still need to be configured.
    • run-step-functions-lambda, which will be used to start the Step Functions.
  4. Three Lambda functions that will be used as part of the Step Function pipeline. These are provided as stubs so you can get practice with the APIs. Full code solutions are available on GitHub.
    • transcribe-audio-lambda, which will start a Transcribe job with uploaded audio files
    • transcribe-status-lambda, which will check the status of the Transcribe job
    • categorize-data-lambda, which will move the data based on the transcript

You will need to create an IAM role for Step Functions. The S3 bucket and Lambda functions will also need further configuration.

Lab Goals

  1. Prepare to Launch the Step Function
  2. Create the Step Function Flow
  3. Create the Lambda Business Logic
  4. Categorize Audio Data

Logging in to the Lab Environment

To avoid issues with the lab, use a new Incognito or Private browser window to log in to the lab. This ensures that your personal account credentials, which may be active in your main window, are not used for the lab.

Log in to the AWS console using the account credentials provided with the lab. Please make sure you are in the us-east-1 (N. Virginia) region when in the AWS console.

What are Hands-on Labs

Hands-on Labs are real environments created by industry experts to help you learn. These environments help you gain knowledge and experience, practice without compromising your system, test without risk, destroy without fear, and let you learn from your mistakes. Hands-on Labs: practice your skills before delivering in the real world.
