How to Set Up Nginx Reverse Proxy on EC2 with SSL Using Terraform
Nginx is the most popular reverse proxy for a reason — it handles SSL termination, security headers, rate limiting, and load distribution with minimal resource usage. In this guide, we’ll deploy a production Nginx reverse proxy on EC2 using Terraform, configure free SSL from Let’s Encrypt, and harden it for real traffic.
Why Use Nginx as a Reverse Proxy
- SSL termination — handle HTTPS at the proxy layer so your backend runs plain HTTP
- Security headers — add HSTS, X-Frame-Options, CSP, and others in one place
- Rate limiting — protect your backend from abuse without application changes
- Load distribution — route traffic to multiple backend instances
- Gzip compression — reduce bandwidth and improve response times
- Static file serving — serve assets directly without hitting your application server
Infrastructure with Terraform
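The resources below reference several input variables (var.vpc_id, var.admin_cidr, and so on) that are never declared in the snippets. A minimal variables.tf along these lines makes them self-contained; the descriptions and example values are illustrative:

```hcl
variable "vpc_id" {
  description = "VPC to deploy the proxy into"
  type        = string
}

variable "public_subnet_id" {
  description = "Public subnet for the EC2 instance"
  type        = string
}

variable "admin_cidr" {
  description = "CIDR allowed to SSH, e.g. 203.0.113.10/32"
  type        = string
}

variable "key_name" {
  description = "Name of an existing EC2 key pair"
  type        = string
}

variable "route53_zone_id" {
  description = "Hosted zone ID for the domain"
  type        = string
}

variable "domain_name" {
  description = "FQDN for the proxy, e.g. example.com"
  type        = string
}
```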
Security Group
resource "aws_security_group" "nginx" {
  name        = "nginx-proxy-sg"
  description = "Security group for Nginx reverse proxy"
  vpc_id      = var.vpc_id

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTPS"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.admin_cidr] # Restrict to your IP
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = { Name = "nginx-proxy-sg" }
}
EC2 Instance with User Data
resource "aws_instance" "nginx" {
  ami                    = data.aws_ami.amazon_linux.id
  instance_type          = "t3.micro"
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.nginx.id]
  subnet_id              = var.public_subnet_id

  root_block_device {
    volume_size = 20
    volume_type = "gp3"
    iops        = 3000
    throughput  = 125
  }

  # Amazon Linux 2023 uses dnf; amazon-linux-extras no longer exists,
  # and Certbot is not packaged for AL2023, so install it via pip.
  user_data = <<-EOF
    #!/bin/bash
    dnf update -y
    dnf install -y nginx python3-pip
    python3 -m pip install certbot certbot-nginx
    systemctl enable nginx
    systemctl start nginx
  EOF

  tags = { Name = "nginx-proxy" }
}
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-*-x86_64"]
  }
}
Elastic IP and Route 53
resource "aws_eip" "nginx" {
  instance = aws_instance.nginx.id
  domain   = "vpc"
  tags     = { Name = "nginx-proxy-eip" }
}

resource "aws_route53_record" "nginx" {
  zone_id = var.route53_zone_id
  name    = var.domain_name
  type    = "A"
  ttl     = 300
  records = [aws_eip.nginx.public_ip]
}
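After an apply you will want the public IP and DNS name at hand; a small outputs.tf (hypothetical, but matching the resource names above) saves a trip to the console:

```hcl
output "nginx_public_ip" {
  description = "Elastic IP attached to the proxy"
  value       = aws_eip.nginx.public_ip
}

output "nginx_fqdn" {
  description = "DNS name pointing at the proxy"
  value       = aws_route53_record.nginx.fqdn
}
```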
Basic Reverse Proxy Configuration
SSH into the instance and create the Nginx config:
# /etc/nginx/conf.d/app.conf
server {
    listen 80;
    server_name example.com www.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;  # Your backend
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}

# Test and reload
sudo nginx -t
sudo systemctl reload nginx
SSL with Let’s Encrypt
Install Certbot and Obtain Certificate
# Install Certbot (if not done via user_data).
# Certbot is not packaged for Amazon Linux 2023; install it via pip:
sudo dnf install -y python3-pip
sudo python3 -m pip install certbot certbot-nginx

# Obtain certificate — Certbot auto-modifies your Nginx config
sudo certbot --nginx \
  -d example.com \
  -d www.example.com \
  --non-interactive \
  --agree-tos \
  --email admin@example.com
Certbot automatically:
- Obtains the certificate from Let’s Encrypt
- Modifies your Nginx config to listen on 443 with SSL
- Adds a 301 redirect from HTTP to HTTPS
- Reloads Nginx
Auto-Renewal
# Test renewal
sudo certbot renew --dry-run

# Package and snap installs of Certbot ship a systemd timer; verify with:
sudo systemctl status certbot-renew.timer

# pip installs do not create a timer, so schedule renewal with cron instead:
echo "0 3 * * * root certbot renew --quiet --post-hook 'systemctl reload nginx'" | sudo tee /etc/cron.d/certbot-renew
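If you prefer systemd over cron for a pip-installed Certbot, a unit pair like the following works; the ExecStart path assumes pip placed certbot in /usr/local/bin, so adjust to `which certbot` on your instance:

```ini
# /etc/systemd/system/certbot-renew.service
[Unit]
Description=Renew Let's Encrypt certificates

[Service]
Type=oneshot
ExecStart=/usr/local/bin/certbot renew --quiet --post-hook "systemctl reload nginx"

# /etc/systemd/system/certbot-renew.timer
[Unit]
Description=Run certbot-renew twice daily

[Timer]
OnCalendar=*-*-* 03,15:00:00
RandomizedDelaySec=1h
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with sudo systemctl enable --now certbot-renew.timer.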
Production Nginx Configuration
Here is a complete production nginx.conf with security hardening, gzip, rate limiting, and optimized settings:
# /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /run/nginx.pid;

events {
    worker_connections 1024;
    multi_accept on;
    use epoll;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logging
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    '$request_time $upstream_response_time';
    access_log /var/log/nginx/access.log main;

    # Performance
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    client_max_body_size 50M;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_min_length 1000;
    gzip_types text/plain text/css application/json
               application/javascript text/xml application/xml
               application/xml+rss text/javascript image/svg+xml;

    # Rate limiting
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=login:10m rate=3r/s;

    # Security headers. Note: add_header directives are inherited by a
    # server/location block only if that block declares none of its own.
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # Hide Nginx version
    server_tokens off;

    include /etc/nginx/conf.d/*.conf;
}
Production Server Block
# /etc/nginx/conf.d/app.conf
upstream backend {
    server 127.0.0.1:3000;
    keepalive 32;
}

# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com www.example.com;

    # SSL (managed by Certbot)
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    # Rate limiting on API endpoints
    location /api/ {
        limit_req zone=api burst=20 nodelay;
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Stricter rate limit on login
    location /api/auth/ {
        limit_req zone=login burst=5 nodelay;
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Default proxy
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }

    # Health check endpoint
    location /health {
        access_log off;
        default_type text/plain;  # sets Content-Type without duplicating headers
        return 200 'OK';
    }
}
Testing SSL
After configuring SSL, verify your setup:
# Quick test
curl -I https://example.com
# Check certificate details
openssl s_client -connect example.com:443 -servername example.com < /dev/null 2>/dev/null | openssl x509 -noout -dates
For a thorough check, run your domain through SSL Labs at https://www.ssllabs.com/ssltest/. You should get an A or A+ rating with the configuration above.
Monitoring Nginx
Access Logs
Our log format includes request time and upstream response time, making it easy to find slow requests:
# Find slow requests (request_time over 1 second; it is the
# second-to-last field in our log format)
awk '$(NF-1) > 1.0' /var/log/nginx/access.log
# Count requests by status code
awk '{print $9}' /var/log/nginx/access.log | sort | uniq -c | sort -rn
# Top 10 IPs by request count
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -10
Stub Status for Prometheus
# Add to nginx config inside the server block
location /nginx_status {
    stub_status;
    allow 127.0.0.1;   # Only allow localhost
    allow 10.0.0.0/8;  # Allow VPC range for Prometheus
    deny all;
}
Use the nginx-prometheus-exporter to scrape these metrics into Prometheus and build Grafana dashboards for active connections, request rates, and error rates.
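A sketch of the scrape job; the exporter's default port (9113) and flag syntax vary between releases, so check your version's --help, and the target IP here is just an example:

```yaml
# prometheus.yml
# On the instance, run something like:
#   nginx-prometheus-exporter -nginx.scrape-uri=http://127.0.0.1/nginx_status
# which exposes metrics on :9113 by default.
scrape_configs:
  - job_name: nginx
    scrape_interval: 15s
    static_configs:
      - targets: ["10.0.1.20:9113"]  # proxy instance's private IP (example)
```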
Common Mistakes
- Not setting proxy_set_header Host $host — your backend sees the internal IP instead of the actual domain, breaking virtual hosts and redirects
- Missing X-Forwarded-For — your backend sees Nginx’s IP instead of the client’s real IP
- Forgetting to open port 80 for Certbot — Let’s Encrypt validation needs HTTP access even after you switch to HTTPS
- No rate limiting — a single bad actor can overwhelm your backend
- Using the default worker_connections without tuning — the built-in default of 512 is too low for production
- Not setting client_max_body_size — the 1MB default causes file upload failures
- Running Nginx as root — always run worker processes as the nginx user
Frequently Asked Questions
Should I use Nginx or ALB as a reverse proxy?
Use ALB when you have multiple targets behind auto-scaling groups, need path-based routing across microservices, or want managed SSL. Use Nginx on EC2 when you have a single backend, need custom config (Lua, advanced rate limiting), or want to minimize cost. Nginx on a t3.micro costs ~$8/month vs ALB at ~$22/month minimum.
How does Let’s Encrypt auto-renewal work?
Certbot installs a systemd timer (or cron job) that runs twice daily. It only renews certificates within 30 days of expiry. Test with sudo certbot renew --dry-run. Certificates are valid for 90 days.
Nginx vs Apache for reverse proxy?
Nginx uses an event-driven architecture — handles thousands of connections with minimal memory. Apache uses process/thread per connection, consuming more memory. For reverse proxy, Nginx is the standard choice.
How much does this setup cost?
EC2 t3.micro: ~$7.50/month (or free tier). Elastic IP: free when attached to a running instance. Domain: $10-15/year. Let’s Encrypt: free. Total: under $10/month.
What is the max number of connections Nginx can handle?
Nginx can handle 10,000+ concurrent connections on a t3.micro. The bottleneck is usually your backend, not Nginx. Raise worker_connections (512 by default; the production config above sets 1024) and set worker_processes auto to match CPU cores.