Deploy a React app to AWS S3 with a Github Actions Deployment pipeline
Introduction
This article explains the key parts of a complete infrastructure setup and deployment pipeline for a barebones React app. The setup has the following features:
- Hosted in AWS S3
- Fast content delivery using AWS Cloudfront CDN (Content Delivery Network)
- SSL/HTTPS certificates issued by AWS
- All infrastructure provisioned as code using Terraform
- A Github Actions deployment pipeline
If you simply want to see the complete code example, you can find that here.
Pre-requisites
Before getting started make sure that you have the following ready-to-go:
- An AWS account
- Two Users in your AWS account, each with programmatic access (an Access Key ID and a Secret Access Key) and the following permissions:
  - A Terraform User with the following AWS Policies:
    - AmazonS3FullAccess
    - CloudFrontFullAccess
    - IAMFullAccess
    - AWSCertificateManagerFullAccess
  - A Github Deployment pipeline User with the following AWS Policies:
    - AmazonS3FullAccess
    - CloudFrontFullAccess
- Terraform installed and configured for use with your AWS account (Installation instructions)
- A domain name
Understanding AWS and Terraform
In AWS there are 3 parts to set up:
- S3 bucket - the static files for the React app will be stored here. AWS can serve these static files for us over the web straight from the S3 bucket.
- Cloudfront - a CDN to push the static files to AWS edge locations. This pushes the code as close to end-users as possible.
- AWS certificate - AWS will issue an SSL certificate for our domain, which the Cloudfront distribution will use to serve traffic over HTTPS.
These will be set up using Terraform. Terraform lets you define the "state of the world" that you want the infrastructure to be in. Terraform then works out what changes it needs to make to set the infrastructure in the correct state.
Terraform can be confusing at first, but there are 3 concepts to keep in mind while working with it.
- Resources: A Terraform resource describes a piece of infrastructure in AWS, such as an S3 bucket or an EC2 instance. What's important is that a resource is where those objects are defined and created.
- Data: Data refers to pieces of infrastructure that already exist. If that infrastructure was created through Terraform, the data object will have a corresponding resource object somewhere else. Data also covers infrastructure created outside of Terraform: if you go into the AWS Console and manually create an EC2 instance, you can still reference that EC2 instance through a Terraform data object.
- Variables: Variables are pieces of data (strings or numbers) that we wish to share across our Terraform code.
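As a minimal sketch of the difference between a resource and a data object (the names and the tag filter below are hypothetical, not part of this article's setup):
# A resource defines and creates infrastructure when applied
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket"
}

# A data object references infrastructure that already exists,
# even if it was created manually in the AWS Console
data "aws_instance" "existing" {
  filter {
    name   = "tag:Name"
    values = ["manually-created-instance"]
  }
}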
Provisioning the Infrastructure using Terraform
Setup
Create a new file called variables.tf and add the following.
variable "bucket_name" {
default = "react-aws-terraform-github-actions"
description = "The name of the bucket"
}
variable "aws_region" {
type = string
default = "eu-west-1"
}
Replace the bucket_name and aws_region as required.
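Later, when planning or applying, these defaults can also be overridden from the command line without editing the file. A quick sketch (the bucket name and region here are placeholders):
# Override variables for a single run
terraform apply -var 'bucket_name=my-own-bucket' -var 'aws_region=us-west-2'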
Initialize Terraform
Create a new file called main.tf. For now all Terraform code will be written inside this file. At a later point it can be split out into separate files.
provider "aws" {
region = var.aws_region
}
provider "aws" {
region = "us-east-1"
alias = "use1"
}
locals {
domain = "react-aws-terraform-github-actions.andyjones.co"
s3_origin_id = "s3-react-aws-terraform-github-actions"
}
The domain should be the domain the application will be served from, and s3_origin_id can be anything. Two providers are created. The second provider is configured to use us-east-1, since certificates used by Cloudfront must be issued in that region.
In the same folder as the main.tf file, run terraform init to initialize a new Terraform project.
S3 bucket
Create the S3 bucket and attach the s3-website-policy. Notice that the policy document is declared using the data declaration: Terraform renders the policy JSON for us rather than creating a standalone policy object in AWS.
data "aws_iam_policy_document" "s3-website-policy" {
statement {
actions = [
"s3:GetObject"
]
principals {
identifiers = ["*"]
type = "AWS"
}
resources = [
"arn:aws:s3:::${var.bucket_name}/*"
]
}
}
resource "aws_s3_bucket" "react-aws-terraform-github-actions-s3-bucket" {
bucket = var.bucket_name
acl = "public-read"
policy = data.aws_iam_policy_document.s3-website-policy.json
website {
index_document = "index.html"
error_document = "index.html"
}
}
resource "aws_s3_bucket_public_access_block" "react-aws-terraform-github-actions-s3-access-control" {
bucket = aws_s3_bucket.react-aws-terraform-github-actions-s3-bucket.id
block_public_acls = true
ignore_public_acls = true
}
This creates the S3 bucket and configures the public access rules for that bucket. Test out the configuration by running terraform plan to see the planned changes:
Plan: 2 to add, 0 to change, 0 to destroy
Implement the changes to AWS using terraform apply. Type yes to confirm the changes.
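To double-check that the bucket was created and configured as a website, the AWS CLI can be used. A quick sketch, assuming the default bucket_name from variables.tf:
# Should print the index/error document configuration
aws s3api get-bucket-website --bucket react-aws-terraform-github-actions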
SSL Certificate
Create the SSL certificate for your domain. Notice that the use1 provider is used.
resource "aws_acm_certificate" "react-aws-terraform-github-actions-cert" {
provider = aws.use1
domain_name = local.domain
validation_method = "DNS"
lifecycle {
create_before_destroy = true
}
}
Once again run terraform plan to see the planned changes:
Plan: 1 to add, 0 to change, 0 to destroy
Run terraform apply to create the certificate.
AWS will require you to confirm the certificate manually before it's ready for use. In order to confirm the certificate log into the AWS console and from the us-east-1
region, navigate to "Certificate Manager". The certificate for your domain will be listed there. Follow the instructions to validate the domain. This will usually require adding a specific CNAME record to the domain's DNS records. It can take a few minutes for this to verify.
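As an aside: if the domain's DNS happens to be hosted in Route 53, the validation CNAME can also be created in Terraform rather than manually. A sketch, assuming a Route 53 hosted zone for andyjones.co already exists:
data "aws_route53_zone" "zone" {
  name = "andyjones.co"
}

# One validation record per domain on the certificate
resource "aws_route53_record" "cert-validation" {
  for_each = {
    for dvo in aws_acm_certificate.react-aws-terraform-github-actions-cert.domain_validation_options :
    dvo.domain_name => {
      name   = dvo.resource_record_name
      type   = dvo.resource_record_type
      record = dvo.resource_record_value
    }
  }

  zone_id = data.aws_route53_zone.zone.zone_id
  name    = each.value.name
  type    = each.value.type
  records = [each.value.record]
  ttl     = 300
}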
Cloudfront distribution
The final piece of infrastructure to provision is the CDN. Use the following Terraform to create the Cloudfront distribution.
resource "aws_cloudfront_distribution" "react-aws-terraform-github-actions" {
enabled = true
is_ipv6_enabled = true
comment = "The cloudfront distribution for react-aws-terraform-github-actions.andyjones.co"
default_root_object = "index.html"
aliases = [local.domain]
default_cache_behavior {
allowed_methods = ["GET", "HEAD"]
cached_methods = ["GET", "HEAD"]
target_origin_id = local.s3_origin_id
viewer_protocol_policy = "redirect-to-https"
forwarded_values {
query_string = false
cookies {
forward = "all"
}
}
}
origin {
domain_name = aws_s3_bucket.react-aws-terraform-github-actions-s3-bucket.bucket_regional_domain_name
origin_id = local.s3_origin_id
}
restrictions {
geo_restriction {
restriction_type = "none"
}
}
viewer_certificate {
acm_certificate_arn = aws_acm_certificate.react-aws-terraform-github-actions-cert.arn
ssl_support_method = "sni-only"
}
custom_error_response {
error_code = 404
error_caching_min_ttl = 86400
response_page_path = "/index.html"
response_code = 200
}
}
This creates a CDN which points to the S3 bucket and uses the certificate. Notice in the custom_error_response section that all 404 responses are translated into HTTP 200 with the body of /index.html. This means that if the app is using client-side routing (such as React Router) then the app will still render and initialize in the browser. For example, the S3 bucket will return an HTTP 404 for a request sent to https://yourdomain.com/path-that-does-not-exist. The CDN maps that 404 into a 200 and returns the index.html file in the response body.
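Once the stack is deployed and the DNS record (covered below) is in place, this behaviour can be verified from the command line. A quick check, with yourdomain.com standing in for your real domain:
# Expect a 200 response: Cloudfront serves /index.html for the missing path
curl -I https://yourdomain.com/path-that-does-not-exist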
Run terraform plan to see the planned changes:
Plan: 1 to add, 0 to change, 0 to destroy.
If all is good then apply the changes with terraform apply.
The last step is to create a CNAME DNS record for the domain which points to the Cloudfront domain name. The Cloudfront domain name can be found from the AWS Console.
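Alternatively, Terraform can print the Cloudfront domain name directly. An optional sketch to add to main.tf (the output name is my own choice):
output "cloudfront_domain_name" {
  value = aws_cloudfront_distribution.react-aws-terraform-github-actions.domain_name
}
After the next terraform apply, the *.cloudfront.net address is shown in the output.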
Understanding the Github Actions deployment pipeline
The Github Actions pipeline utilizes the fantastic set of Actions on the Github Marketplace.
The pipeline has two stages:
Stage One - Build
- Clone the repository
- Set/Get node_modules cache
- Build static files
- Upload the built static files as an artifact for use by Stage Two
Stage Two - Deploy
- Download the static file bundle from Stage One
- Login to the AWS CLI
- Push the static file bundle to S3 bucket
- Invalidate the index.html file in the Cloudfront distribution. This will force all edge servers to re-fetch the latest index.html from S3.
Create IAM Policy to Allow Github Actions to invalidate paths in Cloudfront
In order to invalidate the index.html file in Cloudfront a specific policy is required. Create this using Terraform.
resource "aws_iam_policy" "cloudfront-invalidate-paths" {
name = "cloudfront-invalidate-paths"
description = "Used by CI pipelines to delete cached paths"
policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Sid = "VisualEditor0",
Effect = "Allow",
Action = "cloudfront:CreateInvalidation",
Resource = "*"
}
]
})
}
Run terraform apply to create the policy, then assign it to the Github deployment pipeline User from the AWS IAM Console.
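The same assignment can also be done with the AWS CLI. A sketch; the user name and account ID below are hypothetical placeholders:
aws iam attach-user-policy \
  --user-name github-deploy-user \
  --policy-arn arn:aws:iam::123456789012:policy/cloudfront-invalidate-paths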
Create the Github action
In the same location as the React app, create the Github action yaml file.
mkdir -p .github/workflows && touch .github/workflows/main.yml
and define the steps for the Github action:
name: Deploy Production
on: [push]
env:
  AWS_REGION: eu-west-1
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Clone the repo
      - name: Clone repository
        uses: actions/checkout@v1
      # Cache node modules
      - name: Cache node modules
        uses: actions/cache@v1
        with:
          path: node_modules
          key: yarn-deps-${{ hashFiles('yarn.lock') }}
          restore-keys: |
            yarn-deps-
      # Build the static site
      - name: Create static build
        run: yarn install && yarn build
      # Upload the artifact for other stages to use
      - name: Share artifact in github workflow
        uses: actions/upload-artifact@v1
        with:
          name: build
          path: build
  deploy:
    runs-on: ubuntu-latest
    needs: build
    steps:
      # Download the build artifact
      - name: Get build artifact
        uses: actions/download-artifact@v1
        with:
          name: build
      # Setup the AWS credentials
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      # Copy the files from /build to the S3 bucket
      - name: Deploy static site to S3 bucket
        run: aws s3 sync . s3://${{ secrets.S3_BUCKET_NAME }} --delete
        working-directory: build
      # Invalidate index file in Cloudfront (this forces edges to fetch the latest index.html)
      - name: Invalidate index.html in Cloudfront
        uses: chetan/invalidate-cloudfront-action@master
        env:
          DISTRIBUTION: ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }}
          PATHS: "/index.html"
          AWS_REGION: ${{ env.AWS_REGION }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
Create the following Github Secrets:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
S3_BUCKET_NAME
CLOUDFRONT_DISTRIBUTION_ID
where the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are the programmatic access keys for the Github Deployment User in AWS. The CLOUDFRONT_DISTRIBUTION_ID can be found in the AWS Console.
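If you use the GitHub CLI, the secrets can be set from the terminal instead of the web UI (each gh command prompts for the value):
# Run from the repository root
gh secret set AWS_ACCESS_KEY_ID
gh secret set AWS_SECRET_ACCESS_KEY
gh secret set S3_BUCKET_NAME
gh secret set CLOUDFRONT_DISTRIBUTION_ID

# The distribution ID can also be looked up with the AWS CLI
aws cloudfront list-distributions --query "DistributionList.Items[].{Id:Id,Domain:DomainName}"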
Update the AWS_REGION env variable in the main.yml file to the correct region.
Conclusion
Everything should now be set up. Any time the Github repository is pushed to, the Build and Deploy pipeline will be triggered.
What's great about this setup is that it's easy to replicate for any static site, and the AWS costs are incredibly low.
If you would like to suggest any improvements then feel free to send me a message on Twitter or open an Issue/PR in the example repo.