The individual machine learning services provided by AWS are powerful on their own, but chaining them together enables far richer applications. However, the outputs and inputs of each service need to be coordinated, because each service takes a varying amount of time depending on its input data. Step Functions are one way to keep track of all of these moving pieces.
In this lab, you will be modifying an existing pipeline to learn how to set up the coordination between the services. We will be using Lambda functions in the background to run the logic of our pipeline, but all of the Lambda functions are provided for you.
Successfully complete this lab by achieving the following learning objectives:
- Add Translation to the Pipeline
  - Inspect the existing Step Functions pipeline.
  - Once the audio transcription is available, translate the text to Spanish.
    - Hint: Check out the provided Lambda functions.
  - The translation must be complete before the audio can be made from it. Make sure this step completes before continuing the pipeline.
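The ordering requirement above is exactly what a Step Functions `Task` state gives you: the next state does not start until the task returns. Below is a minimal sketch of what such a translation state might look like in Amazon States Language, expressed as a Python dict. The state names, Lambda ARN, and JSON paths are illustrative assumptions, not the lab's actual values.

```python
import json

# Hedged sketch of an Amazon States Language Task state for the translation
# step. The state name, Lambda ARN, and paths are illustrative assumptions;
# your pipeline's actual definitions will differ.
translate_state = {
    "TranslateText": {
        "Type": "Task",
        "Resource": "arn:aws:lambda:us-east-1:123456789012:function:translateText",
        # Attach the translation under its own key so later states can
        # still read the original transcription from the input.
        "ResultPath": "$.translatedText",
        # "Next" only fires after the Task returns, which is how Step
        # Functions guarantees the translation finishes before the
        # pipeline continues.
        "Next": "SynthesizeSpeech",
    }
}

print(json.dumps(translate_state, indent=2))
```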
- Add Sentiment Analysis to the Pipeline
  - Once the audio transcription is available, determine whether the audio content feels positive or negative.
  - This sentiment will determine the output folder of our translated audio file, so make sure it is done before continuing the pipeline.
  - Hint: The sentiment processing uses the original transcription text. It does not need the translated text.
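Because sentiment analysis only needs the original transcription, it does not have to wait for the translation; one common pattern is to run both in a `Parallel` state, which the pipeline leaves only when every branch has finished. Here is a hedged sketch of that shape as a Python dict; all state names and ARNs are assumptions for illustration.

```python
import json

# Hedged sketch: sentiment analysis uses the original transcription, not the
# translation, so the two tasks can run as branches of a Parallel state.
# Every name and ARN below is an illustrative assumption.
parallel_state = {
    "TranslateAndAnalyze": {
        "Type": "Parallel",
        "Branches": [
            {
                "StartAt": "TranslateText",
                "States": {
                    "TranslateText": {
                        "Type": "Task",
                        "Resource": "arn:aws:lambda:us-east-1:123456789012:function:translateText",
                        "End": True,
                    }
                },
            },
            {
                "StartAt": "DetectSentiment",
                "States": {
                    "DetectSentiment": {
                        "Type": "Task",
                        "Resource": "arn:aws:lambda:us-east-1:123456789012:function:detectSentiment",
                        "End": True,
                    }
                },
            },
        ],
        # The pipeline continues only after BOTH branches complete, so the
        # sentiment is guaranteed to be available downstream.
        "Next": "SynthesizeSpeech",
    }
}

print(json.dumps(parallel_state, indent=2))
```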
- Convert the Translated Text to Audio
  - Using the sentiment and translated text, convert the text to speech.
  - Store the result in the output S3 bucket in a folder named for the sentiment it represents.
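The Lambda that synthesizes the speech might derive the S3 key from the sentiment along these lines. This helper is a sketch; the lab's actual Lambda functions may format keys differently.

```python
def sentiment_output_key(sentiment: str, base_name: str) -> str:
    """Build an S3 key that places the synthesized audio in a folder
    named for its sentiment.

    Illustrative helper -- the lab's provided Lambda functions may use a
    different key format.
    """
    return f"{sentiment.upper()}/{base_name}.mp3"
```

For example, `sentiment_output_key("positive", "interview")` returns `"POSITIVE/interview.mp3"`, so every positive recording lands in the same folder of the output bucket.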
- Upload Audio and Watch the Magic
  - Upload one of the audio files provided in the GitHub repository.
  - Watch the Step Functions pipeline as it processes the file.
  - Once complete, view the output S3 bucket. Did the audio get categorized correctly?
  - Download the translated audio file and take a listen!