Who doesn’t love a CI/CD pipeline? Especially if you’re wrangling infrastructure on AWS. The question for us today: can we use AWS’s own CodePipeline service to manage frequent deployments of AWS infrastructure?
Well, first… Do you know what continuous integration and continuous deployment pipelines are? If you do, great. Skip over the italicized part below. If you don’t, keep reading.
Continuous integration and continuous deployment (CI/CD) pipelines serve to automate software delivery steps including the build, test, and deployment stages. Yes, this is a safe space. Break out into your happy dance without fear of judgment.
The continuous integration (CI) stage includes the build and test phases: CI builds the change into an artifact and runs tests to verify that the artifact is safe to deploy. Now is a good time to note that CD can also mean continuous delivery, where the deployment phase is not completely automated. In this tutorial, CD means continuous deployment, so all changes are deployed to a production environment. Nervous? Try deploying to a staging environment before production.
All right, here we go. In this blog post I will walk you through the steps I took while creating a CI/CD pipeline for the Cloud Resume Challenge. Here is my GitHub repository for the backend resources for the Cloud Resume Challenge. If you just want to follow along with this example, here is my GitHub repository for this tutorial.
The very first step is to have an AWS Account. It’s free to create, and for the first 12 months you get free access to many resources! (Read more about the AWS Free Tier.)
You have an AWS account. You’re rolling. Let’s get to the… fun parts. In the AWS console, navigate to CodePipeline. Choose Create pipeline. At this point in creating the pipeline, I felt confident. This was my very last step. This’ll take about an hour, I thought. Yeah, right. More like five days (and I was really putting time in). I don’t want it to take anyone else that long. This post is written solely to help readers complete this process much more quickly. Back to you, tutorial.
- Choose a pipeline name and create a new service role. Keep in mind that the service role will probably need to be edited later. (Seriously. Remember this. Permissions were a big hindrance for me.)
Next. Configure the source stage. I used AWS CodeCommit, which means we need a CodeCommit repository. Before we create that repository, we have two steps to complete.
- We need an S3 bucket to store the pipeline’s artifacts. I used the AWS CLI to create the bucket. See below for the command.
- Create a CloudFormation role. In this example, I created a role with CloudFormation as the trusted entity and attached the AWSLambdaExecute managed policy. I then attached the inline policy shown below to the role using the JSON tab. Depending on your SAM application, you may require different permissions.
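To make those two steps concrete, here is a sketch of each. First, creating the artifact bucket with the AWS CLI — the bucket name and region below are placeholders, so substitute your own (bucket names are globally unique):

```shell
# Create the S3 bucket that will hold pipeline artifacts
aws s3 mb s3://my-pipeline-artifact-bucket --region us-east-1
```

Second, the inline policy. The exact action list depends on what your SAM application deploys; the statement below is an assumed example for a typical serverless stack (Lambda, API Gateway, DynamoDB), so tailor it to your own resources:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "iam:GetRole",
        "iam:CreateRole",
        "iam:DeleteRole",
        "iam:AttachRolePolicy",
        "iam:DetachRolePolicy",
        "iam:PassRole",
        "lambda:*",
        "apigateway:*",
        "dynamodb:*"
      ],
      "Resource": "*"
    }
  ]
}
```

Wildcard resources keep the example short; in a real account you would scope `Resource` down to the specific ARNs your stack touches.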
Let’s set up our CodeCommit repository.
- Open CodeCommit in a new tab in the AWS Console.
- Create a repository to hold your artifacts.
- Clone the repository to your local machine.
- Upload all required files to the local folder, then push them to the repository. My repository includes files required for an AWS SAM application.
- Head back to CodePipeline in the AWS console and configure the source stage settings. This establishes our source control service (CodeCommit) as the first step in our deployment pipeline.
- Create the build stage. This is where we’ll package up our application for deployment.
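For reference, the repository I pushed holds the files a SAM application needs — at minimum a template.yaml and the function code. A minimal sketch of such a template, where the function name, runtime, and paths are placeholders rather than my actual resources:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler      # src/app.py, function "handler"
      Runtime: python3.12
      CodeUri: src/
      Events:
        HelloApi:
          Type: Api             # creates an API Gateway endpoint
          Properties:
            Path: /hello
            Method: get
```

The `Transform` line is what makes this a SAM template: CloudFormation expands the `AWS::Serverless::*` shorthand into full Lambda and API Gateway resources at deploy time.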
Next. We have to create a CodeBuild project.
- Choose Create project and choose a project name.
- Take note of the service role name CodeBuild creates. The pipeline will fail on its first run because this role will need to be updated.
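The CodeBuild project reads its build commands from a buildspec.yml at the root of the repository. A minimal sketch for packaging a SAM application — the bucket name is a placeholder (use the artifact bucket created earlier), and the runtime version must be one your CodeBuild image supports:

```yaml
version: 0.2

phases:
  install:
    runtime-versions:
      python: 3.12
  build:
    commands:
      # Upload local code to the artifact bucket and emit a
      # deployable template with S3 URIs substituted in
      - sam package --template-file template.yaml --s3-bucket my-pipeline-artifact-bucket --output-template-file packaged.yaml

artifacts:
  files:
    - packaged.yaml
```

The `packaged.yaml` listed under `artifacts` is what the deploy stage will consume, so the name here has to match what you reference there.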
Next. It is time to configure the deploy stage. We’re close. I promise.
- I am using the lovely CloudFormation for the deploy stage. Remember that AWS SAM is just a superset of CloudFormation, so you can deploy SAM apps natively via CodePipeline! Just one cool advantage of deep integration with AWS services.
- Use the CloudFormation role created earlier for the role in the deploy stage.
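For reference, these are the kinds of values the deploy action asks for. The stack and change set names below are placeholders from my setup, not requirements:

```
Action provider:   AWS CloudFormation
Action mode:       Create or replace a change set
Stack name:        my-sam-app-stack
Change set name:   my-sam-app-changeset
Template:          BuildArtifact::packaged.yaml
Capabilities:      CAPABILITY_IAM
Role name:         (the CloudFormation role created earlier)
```

`CAPABILITY_IAM` is needed because the SAM template creates IAM roles for its Lambda functions; without it the change set will fail to create.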
Next. Review the pipeline and create the pipeline.
Remember that the first run will fail because additional permissions are required.
- Attach the AmazonS3FullAccess policy to the pipeline role we created earlier.
- Retry the deployment.
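If you prefer the CLI to clicking through the console, attaching the managed policy looks like this — the role name below is a placeholder, so use the service role CodePipeline actually created for your pipeline:

```shell
# Grant the pipeline's service role access to the artifact bucket
aws iam attach-role-policy \
  --role-name AWSCodePipelineServiceRole-us-east-1-my-pipeline \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
```

AmazonS3FullAccess is the quick fix I used here; a tighter setup would grant the role access to only the artifact bucket.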
Now, at this point all three stages should show Success. All that’s left is to add an action that executes the CloudFormation change set. That’s it! At least for the pipeline.
- Edit the pipeline, then edit the deploy stage.
- Choose Add action group at the bottom. Configure a CloudFormation action with the Execute a change set action mode, using the same stack and change set names as the previous action.
- Choose Done. Then Save. Then Release change.
- Now we want to automate API testing. Download Postman and Newman, Postman’s command-line collection runner. You may be wondering if this process ever ends. (Spoiler: it does.)
- Copy the API URL and paste it into Postman. Click + to open a new request tab.
- Include the specific test(s) you require. Use the snippets panel on the right side of Postman to help.
- Press Save. Click Collections and export the collection to the same folder that holds your other files for the SAM application.
- Update the buildspec.yml file to run the collection, then run the newman command in your terminal to make sure it works.
- Push the files to the CodeCommit repository. You can then view your CodeBuild logs to see the status of the pipeline.
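Two of the testing pieces above, sketched out. First, a typical Postman test snippet — this runs inside Postman’s sandbox (the `pm` object is provided by Postman), and the status code checked is just an example:

```javascript
// Runs in the Postman sandbox after the request completes
pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});
```

Then, assuming you exported the collection as `tests.json` (the filename is a placeholder) next to your SAM files, the local check looks like:

```shell
# Run the exported collection locally to verify the tests pass
newman run tests.json
```

The same `newman run` command is what you add to your buildspec.yml so the tests run automatically inside the pipeline.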
WHAT IT DO BABYYYY!!! We are done! I hope I have helped! If you enjoyed this tutorial, please follow me on Twitter or connect with me on LinkedIn. I’ll have more AWS tutorials coming in the future. Also, please feel free to ask me any questions! I am more than happy to help if I can.
Brooke Mitchell is an A Cloud Guru learner and cloud developer.