How to Host a Static Website on AWS with S3 and CloudFront Using Terraform
To host a static website on AWS, create a private S3 bucket for storage, a CloudFront distribution for global CDN delivery with HTTPS, an ACM certificate in us-east-1 for free SSL, and a Route 53 alias record for your custom domain. This setup costs under $1/month and handles SPA routing for React, Vue, and Angular apps.
You have built a React app, a portfolio site, or a landing page. Now you want to put it on your own domain with a proper SSL certificate. You could use Vercel or Netlify — they make it easy. But if you want full control over your infrastructure, or you are already running other things on AWS, hosting it yourself with S3 and CloudFront is the better move.
I host my own portfolio site this way. It costs me a few cents per month, it loads fast because CloudFront has edge locations all over the world, and I control every aspect of the setup. This is the same architecture I have used to host client-facing static assets in production.
What You Will Build
- S3 bucket — private storage for your website files. No public access.
- CloudFront distribution — CDN that serves your site from edge locations worldwide. Handles SSL.
- ACM certificate — free SSL certificate from AWS. Auto-renews.
- Route 53 record — connects your custom domain to CloudFront.
- SPA routing — so React, Vue, or Angular apps do not break when someone refreshes on a deep link.
Why S3 + CloudFront
Cost: A typical static site costs under $1 per month. S3 storage is pennies, and CloudFront's free tier covers 1 TB of data transfer.
Speed: CloudFront has 450+ edge locations. Your site loads fast whether the visitor is in Mumbai, New York, or Tokyo.
Security: Your S3 bucket stays completely private. CloudFront is the only thing that can read from it. No public S3 URLs floating around.
Prerequisites
- An AWS account
- Terraform 1.5 or later
- A domain name managed in Route 53 (or you can point your domain's nameservers to Route 53)
- Your static site files ready to upload (the build output from React, Vue, Next.js, etc.)
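The Terraform snippets in this guide reference two input variables. A minimal variables.tf for them might look like this (the variable names are this guide's convention, not anything AWS requires):

```hcl
# variables.tf — inputs referenced throughout this guide
variable "domain_name" {
  description = "Custom domain to serve the site from, e.g. example.com"
  type        = string
}

variable "route53_zone_id" {
  description = "ID of the Route 53 hosted zone that manages the domain"
  type        = string
}
```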
Step 1: Create a Private S3 Bucket
The old approach was to enable S3 static website hosting and make the bucket public. Do not do that. S3 website endpoints do not support HTTPS at all, and a public bucket is a security risk.
Instead, create a private bucket and let CloudFront handle everything:
resource "aws_s3_bucket" "site" {
  bucket = var.domain_name

  tags = {
    Name = var.domain_name
  }
}

resource "aws_s3_bucket_public_access_block" "site" {
  bucket = aws_s3_bucket.site.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
Every single public access setting is blocked. The bucket is completely private.
Step 2: Create CloudFront Origin Access Control
Origin Access Control (OAC) is how CloudFront authenticates with your private S3 bucket. It replaced the older Origin Access Identity (OAI) approach.
resource "aws_cloudfront_origin_access_control" "site" {
  name                              = "${var.domain_name}-oac"
  description                       = "OAC for ${var.domain_name}"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}
The signing_behavior = "always" means every request from CloudFront to S3 is signed. S3 verifies the signature and only serves the content if it matches.
Step 3: Create the CloudFront Distribution
This is the core of the setup. The distribution configuration handles caching, SSL, and the SPA routing trick:
resource "aws_cloudfront_distribution" "site" {
  enabled             = true
  is_ipv6_enabled     = true
  default_root_object = "index.html"
  aliases             = [var.domain_name]
  price_class         = "PriceClass_All"

  origin {
    domain_name              = aws_s3_bucket.site.bucket_regional_domain_name
    origin_id                = "S3-${var.domain_name}"
    origin_access_control_id = aws_cloudfront_origin_access_control.site.id
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "S3-${var.domain_name}"
    viewer_protocol_policy = "redirect-to-https"
    compress               = true

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    min_ttl     = 0
    default_ttl = 86400
    max_ttl     = 31536000
  }

  # SPA routing — return index.html for 403 and 404 errors
  custom_error_response {
    error_code         = 403
    response_code      = 200
    response_page_path = "/index.html"
  }

  custom_error_response {
    error_code         = 404
    response_code      = 200
    response_page_path = "/index.html"
  }

  viewer_certificate {
    acm_certificate_arn      = aws_acm_certificate.site.arn
    ssl_support_method       = "sni-only"
    minimum_protocol_version = "TLSv1.2_2021"
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  tags = {
    Name = var.domain_name
  }
}
The viewer_protocol_policy = "redirect-to-https" ensures every visitor is redirected to HTTPS, and compress = true lets CloudFront serve gzip- or Brotli-compressed responses automatically when the client supports them.
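To make the deploy step easier later, you can export the values you will need as Terraform outputs. This is a convenience sketch, not something the setup requires:

```hcl
# outputs.tf — values needed for uploads and cache invalidation
output "bucket_name" {
  value = aws_s3_bucket.site.id
}

output "distribution_id" {
  value = aws_cloudfront_distribution.site.id
}

output "distribution_domain" {
  value = aws_cloudfront_distribution.site.domain_name
}
```

After terraform apply, `terraform output distribution_id` gives you the ID to pass to the invalidation command instead of looking it up in the console.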
Step 4: Set Up S3 Bucket Policy
Allow only CloudFront to read from your S3 bucket:
resource "aws_s3_bucket_policy" "site" {
  bucket = aws_s3_bucket.site.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "AllowCloudFrontServicePrincipal"
        Effect = "Allow"
        Principal = {
          Service = "cloudfront.amazonaws.com"
        }
        Action   = "s3:GetObject"
        Resource = "${aws_s3_bucket.site.arn}/*"
        Condition = {
          StringEquals = {
            "AWS:SourceArn" = aws_cloudfront_distribution.site.arn
          }
        }
      }
    ]
  })
}
The condition ensures that only YOUR CloudFront distribution can read from this bucket — not any CloudFront distribution.
Step 5: Request an SSL Certificate with ACM
This is the step that trips up most people: the ACM certificate MUST be in us-east-1. CloudFront is a global service and only reads certificates from us-east-1, even if your S3 bucket lives in another region like ap-south-1.
provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}

resource "aws_acm_certificate" "site" {
  provider          = aws.us_east_1
  domain_name       = var.domain_name
  validation_method = "DNS"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_route53_record" "cert_validation" {
  for_each = {
    for dvo in aws_acm_certificate.site.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  zone_id = var.route53_zone_id
  name    = each.value.name
  type    = each.value.type
  records = [each.value.record]
  ttl     = 60
}

resource "aws_acm_certificate_validation" "site" {
  provider                = aws.us_east_1
  certificate_arn         = aws_acm_certificate.site.arn
  validation_record_fqdns = [for record in aws_route53_record.cert_validation : record.fqdn]
}
DNS validation is better than email validation because it auto-renews. Once the CNAME records are in place, ACM handles renewals forever.
Step 6: Connect Your Custom Domain
resource "aws_route53_record" "site" {
  zone_id = var.route53_zone_id
  name    = var.domain_name
  type    = "A"

  alias {
    name                   = aws_cloudfront_distribution.site.domain_name
    zone_id                = aws_cloudfront_distribution.site.hosted_zone_id
    evaluate_target_health = false
  }
}
An alias record is free (no Route 53 query charges) and, unlike a CNAME, works at the zone apex — so akshayghalme.com itself can point to CloudFront, not just subdomains like www.
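The configuration in this guide covers only the apex domain. If you want www to work too, the name has to show up in three places: the certificate, the distribution's aliases, and a second Route 53 record. A sketch of the extra pieces, assuming the same var.domain_name variable as above:

```hcl
# On the certificate, add www as a subject alternative name:
#   subject_alternative_names = ["www.${var.domain_name}"]
# On the distribution, add it to aliases:
#   aliases = [var.domain_name, "www.${var.domain_name}"]

# Second alias record pointing www at the same distribution
resource "aws_route53_record" "www" {
  zone_id = var.route53_zone_id
  name    = "www.${var.domain_name}"
  type    = "A"

  alias {
    name                   = aws_cloudfront_distribution.site.domain_name
    zone_id                = aws_cloudfront_distribution.site.hosted_zone_id
    evaluate_target_health = false
  }
}
```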
Step 7: Deploy Your Site
# Build your site
npm run build

# Upload to S3
aws s3 sync ./build s3://your-domain.com --delete

# Invalidate CloudFront cache
aws cloudfront create-invalidation \
  --distribution-id YOUR_DISTRIBUTION_ID \
  --paths "/*"
The --delete flag removes files from S3 that no longer exist in your build folder. The invalidation tells CloudFront to fetch fresh content from S3.
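One optional refinement, not something the steps above require: since build tools fingerprint asset filenames, you can upload assets with a long cache lifetime and index.html with no-cache, so routine deploys barely need invalidations at all. A sketch:

```shell
# Long-lived cache for fingerprinted assets (their filenames change every build)
aws s3 sync ./build s3://your-domain.com --delete \
  --exclude "index.html" \
  --cache-control "public, max-age=31536000, immutable"

# No caching for index.html, so browsers and CloudFront pick up new deploys
aws s3 cp ./build/index.html s3://your-domain.com/index.html \
  --cache-control "no-cache"
```

With this split, index.html always points at the latest hashed assets, and the old assets simply age out of the cache.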
Why React Apps Break on Refresh
When you navigate to /about in a React app, React Router handles it on the client side. But if you hit refresh, the browser sends a request for /about to CloudFront. CloudFront looks for a file called /about in S3. That file does not exist — your app is all in index.html.
The custom error responses in Step 3 fix this. When CloudFront gets a 403 or 404 from S3, it returns index.html with a 200 status code instead. React Router then takes over and renders the correct page.
Common Mistakes to Avoid
- Requesting the ACM certificate in the wrong region. It must be in us-east-1 for CloudFront; in any other region, CloudFront simply cannot see the certificate.
- Making the S3 bucket public. There is no reason for this. CloudFront handles all requests. A public bucket is a security liability and an S3 cost liability — anyone can download your files directly.
- Forgetting SPA routing. Without the custom error responses, every deep link in your React or Vue app will 404 on page refresh.
- Not invalidating the cache. After uploading new files, your old content will keep being served until the cache TTL expires. Always create an invalidation after deployments.
- Using OAI instead of OAC. Origin Access Identity is the legacy approach. AWS recommends OAC for new setups. OAC supports more features and is simpler to configure.
Frequently Asked Questions
How much does it cost to host a static website on S3 and CloudFront?
For a typical personal website or portfolio, you are looking at a few cents per month. S3 storage is about $0.023 per GB per month, and CloudFront's free tier includes 1 TB of data transfer per month. Most static sites cost well under $1 per month. The SSL certificate from ACM is completely free.
Why do I need CloudFront if S3 can host websites directly?
S3 website hosting does not support HTTPS with custom domains. CloudFront gives you SSL, global edge caching for faster load times worldwide, and better security since your S3 bucket stays completely private.
Why does my React app show a 403 error when I refresh on a deep route?
When you refresh on /about, CloudFront looks for a file called /about in S3, which does not exist. You need custom error responses in CloudFront that return index.html with a 200 status code for 403 and 404 errors.
Why does the ACM certificate need to be in us-east-1?
CloudFront is a global service and only reads SSL certificates from the us-east-1 region. This is an AWS requirement that applies regardless of where your S3 bucket is located.
How do I update my website after the initial deployment?
Upload the new files to S3 using aws s3 sync, then create a CloudFront invalidation with aws cloudfront create-invalidation --distribution-id YOUR_ID --paths '/*'. The invalidation takes a minute or two to propagate globally. You can automate this entire process with a CI/CD pipeline using GitHub Actions.
Skip the Manual Setup — Use the Terraform Module
Everything covered here — private S3 bucket, CloudFront with OAC, SSL certificate, Route 53 record, SPA routing — is packaged into a single Terraform module.
module "static_site" {
  source          = "github.com/akshayghalme/terraform-s3-cloudfront-site"
  domain_name     = "akshayghalme.com"
  route53_zone_id = "Z1234567890"
}
Two variables. Run terraform apply. Your site is live with SSL, CDN, and SPA routing in about 15 minutes (most of that is waiting for CloudFront to deploy).