What
If you haven’t checked out the first part of this walkthrough, take a look at the link below.
In this article we’re going to walk through the process of setting up AWS CodePipeline to automatically deploy our code from GitHub to our existing Docker environment on Elastic Beanstalk.
We’re going to skip the build process and hopefully address it in another walkthrough; the main focus here is simply automating our deployments.
Why
If you haven’t guessed, the main benefit is that we’re no longer deploying things manually every time a new feature is pushed to our GitHub repository. Elastic Beanstalk also comes with the added benefit of rolling deployments, so our app will always be running, and when new code is deployed, there won’t be any interruptions.
Because manually deploying should be deprecated
We also get some other benefits in CodePipeline that we can take advantage of later, like building a Docker image, storing it in a registry such as Amazon ECR, running tests, and even sending notifications on deployments.
Requirements
- GitHub Repository (Existing Code Repo)
- Existing Elastic Beanstalk environment running the Docker platform (a quick way to verify this is sketched just below)
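If you’d like to confirm the Elastic Beanstalk environment from code before starting, here’s a minimal sketch using the AWS SDK for JavaScript (v2) with configured credentials. The region and application name are assumptions, so swap in your own.

const AWS = require('aws-sdk');

// Assumed region and application name — replace with your own values.
const eb = new AWS.ElasticBeanstalk({ region: 'us-east-1' });

eb.describeEnvironments({ ApplicationName: 'my-docker-app' })
  .promise()
  .then((data) => {
    // Each environment reports its name, health, and current status.
    data.Environments.forEach((env) =>
      console.log(env.EnvironmentName, env.Health, env.Status)
    );
  })
  .catch(console.error);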
Configuring CodePipeline
The next few steps cover creating a new pipeline, associating an existing GitHub repository, and linking it with an existing Elastic Beanstalk environment.
The first step is to make your way to CodePipeline.
While within the CodePipeline dashboard, click on Create pipeline in the top right.
This will start a wizard, a series of steps to set up our pipeline.
1. Choose pipeline settings
This first step is just to configure the name, permissions, and storage settings for the pipeline.
Give your pipeline any name you’d like for Pipeline name, but ideally something similar to the name of the app being deployed.
Select New service role, if you don’t already have one; this will automatically generate a Role name and grant the permissions needed to perform all the actions required to deploy our application.
For the advanced settings, you can stick to the defaults for the Artifact store and the Encryption keys.
To give context, the Artifact store is the storage location for the output of each step of the pipeline. If one step produces or modifies something, its output is stored in S3, where it can be picked up by the next step.
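If you’re curious what actually lands in that store, here’s a minimal sketch using the AWS SDK for JavaScript (v2). By default the bucket is named something like codepipeline-<region>-<account-id>; the name below is a made-up example, so check yours in the S3 console.

const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' }); // assumed region

// Hypothetical default artifact bucket name — replace with your own.
s3.listObjectsV2({ Bucket: 'codepipeline-us-east-1-123456789012' })
  .promise()
  .then((data) => data.Contents.forEach((obj) => console.log(obj.Key)))
  .catch(console.error);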
When done, click Next.
2. Add source stage
Step 2 associates the code repository the pipeline will be pulling from and defines how it will detect changes to trigger the pipeline.
For the Source provider we’ll use GitHub.
When selected, it will display a Connect button; click it and give AWS permission to access the GitHub repository.
Once connected, select the Repository by searching for its name.
The branch we’ll use to trigger our pipeline will be master, as a common practice.
For the Change detection options, we’ll select GitHub webhooks, so whenever we merge code to our master branch or push directly to master, it will trigger the pipeline.
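Once the pipeline has been created, you can confirm the webhook was registered with a sketch like the following, again assuming the AWS SDK for JavaScript (v2) and an assumed region.

const AWS = require('aws-sdk');
const codepipeline = new AWS.CodePipeline({ region: 'us-east-1' }); // assumed region

codepipeline.listWebhooks({})
  .promise()
  .then((data) =>
    // Each webhook lists its name and the pipeline it triggers.
    data.webhooks.forEach((hook) =>
      console.log(hook.definition.name, hook.definition.targetPipeline)
    )
  )
  .catch(console.error);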
Click Next.
3. Add build stage
This step will be quick: we’re going to skip the build stage, since we aren’t taking advantage of it in this walkthrough.
Click Skip build stage.
4. Add deploy stage
This step decides where our code is going to be deployed, or in other words, which AWS service is going to run our code.
In our case, select AWS Elastic Beanstalk for our Deploy provider.
Select the Region that your Elastic Beanstalk application is deployed in.
Search and select the Application name under that region.
⚠️Note: If you don’t see your application name, double-check that you have the correct region selected in the top right of your AWS Console. If you don’t, you will need to select that region and perhaps start this process again from the beginning.
Search and select the Environment name from your application.
Click Next.
5. Review
This step is just to review all of the pipeline’s settings and confirm them before creating it.
Click Create pipeline.
Pipeline Initiated
After you create the pipeline, it will automatically pull the code from the GitHub repository and deploy it directly to Elastic Beanstalk.
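You can also watch the stages progress from code. Here’s a minimal sketch using the AWS SDK for JavaScript (v2); the pipeline name below is a hypothetical placeholder.

const AWS = require('aws-sdk');
const codepipeline = new AWS.CodePipeline({ region: 'us-east-1' }); // assumed region

codepipeline.getPipelineState({ name: 'my-docker-app-pipeline' }) // hypothetical name
  .promise()
  .then((data) =>
    // Prints each stage (Source, Deploy) and its latest execution status.
    data.stageStates.forEach((stage) =>
      console.log(stage.stageName, stage.latestExecution && stage.latestExecution.status)
    )
  )
  .catch(console.error);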
Confirming Deployment
If we make our way to our Elastic Beanstalk app, we should see the environment begin deploying and then transition to a successful deployment.
If we go to the app URL, we should see something similar to our first deployment from the previous walkthrough.
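A quick scripted check is sketched below, assuming Node 18+ (for the global fetch) and a hypothetical app URL; swap in your environment’s actual URL.

// Hypothetical Elastic Beanstalk URL — use your environment's actual URL.
const APP_URL = 'http://my-docker-app.us-east-1.elasticbeanstalk.com';

fetch(APP_URL)
  .then((res) => res.json())
  .then((body) => console.log(body)) // should echo the JSON from our root endpoint
  .catch(console.error);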
See Deployment In Action
In this next part, we’ll make a change to our GitHub repository and see the change automatically deployed.
Code Change
You can use your own repository, but for this part we’ll be utilizing this repository:
The change may already be in the repository, but the new lines we’re adding show the environment name in the endpoint’s response:
const ENVIRONMENT = process.env.NODE_ENV || 'development';

// ...

app.get('/', (_req, res) => res.send({ version: VERSION, environment: ENVIRONMENT }));
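One thing to keep in mind: since the code falls back to 'development' when NODE_ENV isn’t set, the endpoint will report 'development' unless you’ve configured NODE_ENV as an environment variable on your Elastic Beanstalk environment.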
Make the changes and either commit and push directly to master, or create a new pull request and then merge that request into the master branch.
Once pushed or merged, you can watch CodePipeline automatically pull and deploy this new code.
What’s Next
The next steps would be to introduce and automate a build process, pass in additional environment variables, introduce logging, add SSH access, and perhaps set up notifications.
If you haven’t read the first part of this article, check out the following:
If you got value from this, please share it on Twitter 🐦 or other social media platforms. Thanks again for reading. 🙏
Please also follow me on Twitter: @codingwithmanny and Instagram at @codingwithmanny.