TERRAFORM

How to Configure Terraform Remote State with S3 and DynamoDB Locking

By Akshay Ghalme · March 31, 2026 · 10 min read

To configure Terraform remote state, create a versioned S3 bucket with encryption for state storage and a DynamoDB table with a LockID partition key for state locking. Then add a backend "s3" block to your Terraform configuration pointing to both resources. This prevents state corruption when multiple team members run Terraform simultaneously and keeps your state file secure and versioned.

By default, Terraform stores state in a local file called terraform.tfstate. This works when you are the only person working on the infrastructure. The moment a second person joins the project, or you start running Terraform from a CI/CD pipeline, local state becomes a problem.

Two people cannot safely run terraform apply against the same local file. There is no locking mechanism, no versioning, and if someone's laptop dies, the state file is gone. I have seen teams lose track of what infrastructure they even had because the state file lived on the machine of someone who had left the company.

Remote state with S3 and DynamoDB fixes all of this. The state lives in a central, versioned, encrypted bucket. DynamoDB ensures only one person can modify it at a time.

What You Will Build

  • S3 bucket — stores your state file with versioning (so you can roll back) and encryption (so credentials in state are protected)
  • DynamoDB table — provides state locking so two terraform apply runs cannot happen simultaneously
  • Backend configuration — tells Terraform to use S3 instead of local state
  • Per-environment isolation — separate state files for dev, staging, and production

Why Remote State Matters

Terraform state contains everything Terraform knows about your infrastructure — resource IDs, IP addresses, database passwords, and the relationships between resources. Without it, Terraform cannot plan changes or destroy resources safely.

Three problems with local state:

  1. No collaboration. Two people running terraform apply with different local state files will create duplicate resources or overwrite each other's changes.
  2. No locking. Even with shared state on a network drive, nothing prevents concurrent writes that corrupt the file.
  3. No protection. State files contain sensitive data in plain text. A local file on a developer's laptop is one stolen laptop away from a security incident.

Prerequisites

  • An AWS account
  • Terraform 1.5 or later
  • AWS CLI configured

Step 1: Create the S3 Bucket for State Storage

This bucket stores your Terraform state files. It needs versioning (so you can recover previous state), encryption (so secrets in state are protected), and public access blocked.

# backend-infra/main.tf — run this ONCE to create the backend infrastructure

provider "aws" {
  region = "ap-south-1"
}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "your-company-terraform-state"

  lifecycle {
    prevent_destroy = true
  }

  tags = {
    Name    = "Terraform State"
    Managed = "manual"
  }
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
    bucket_key_enabled = true
  }
}

resource "aws_s3_bucket_public_access_block" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

The prevent_destroy = true lifecycle rule stops you from accidentally destroying the bucket that holds all your state. If someone runs terraform destroy on this configuration, Terraform will refuse.

Name your bucket something unique and identifiable. I use the pattern company-name-terraform-state. S3 bucket names are globally unique, so generic names like "terraform-state" are already taken.

Step 2: Create the DynamoDB Table for State Locking

The DynamoDB table provides a locking mechanism. When someone runs terraform apply, Terraform writes a lock entry to this table. If another person tries to run apply at the same time, Terraform sees the lock and waits.

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-state-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }

  tags = {
    Name    = "Terraform State Locks"
    Managed = "manual"
  }
}

Two important details:

  • The hash key must be LockID — exactly that string, capital L, capital I, capital D. Terraform expects this exact name. With any other spelling, such as "lock_id" or "lockId", DynamoDB rejects Terraform's lock writes and every run fails to acquire the lock.
  • PAY_PER_REQUEST billing — Terraform state locks happen a few times a day at most. Pay-per-request means you pay fractions of a cent instead of maintaining provisioned capacity.
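Whatever identity runs Terraform (a CI role or an operator's IAM role) needs access to both resources. A minimal policy, based on the S3 backend's documented permission requirements, looks roughly like this — attach it to your own role; the bucket and table names match the resources created above:

```hcl
# Minimal backend permissions for the role that runs Terraform.
data "aws_iam_policy_document" "terraform_backend" {
  # List the state bucket
  statement {
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::your-company-terraform-state"]
  }

  # Read and write state objects
  statement {
    actions   = ["s3:GetObject", "s3:PutObject"]
    resources = ["arn:aws:s3:::your-company-terraform-state/*"]
  }

  # Acquire and release locks in the DynamoDB table
  statement {
    actions = [
      "dynamodb:DescribeTable",
      "dynamodb:GetItem",
      "dynamodb:PutItem",
      "dynamodb:DeleteItem",
    ]
    resources = [aws_dynamodb_table.terraform_locks.arn]
  }
}
```

If you use workspaces, the backend also needs s3:DeleteObject on the state objects.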

Step 3: Configure the S3 Backend

Now in your actual infrastructure project (not the backend-infra project), add the backend configuration:

# main.tf — your infrastructure project

terraform {
  backend "s3" {
    bucket         = "your-company-terraform-state"
    key            = "prod/vpc/terraform.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "terraform-state-locks"
    encrypt        = true
  }
}

provider "aws" {
  region = "ap-south-1"
}

The key is the path inside the S3 bucket where this specific state file lives. I use the pattern environment/component/terraform.tfstate — for example:

# Each project gets its own state file path
prod/vpc/terraform.tfstate
prod/rds/terraform.tfstate
prod/ecs/terraform.tfstate
staging/vpc/terraform.tfstate
staging/rds/terraform.tfstate
dev/vpc/terraform.tfstate

This means you can run terraform destroy on dev/vpc without any risk of touching production. Each state file is completely isolated.
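Splitting state per component has a second benefit: one component can read another's outputs through the terraform_remote_state data source. A sketch, assuming the prod/vpc state exposes a vpc_id output:

```hcl
# In the prod/rds project: read outputs from the prod/vpc state file
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "your-company-terraform-state"
    key    = "prod/vpc/terraform.tfstate"
    region = "ap-south-1"
  }
}

# Then reference it, e.g. when placing the database in the VPC:
# vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
```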

Step 4: Migrate Local State to Remote

If you already have local state from previous terraform apply runs, you need to migrate it:

$ terraform init

Initializing the backend...
Do you want to copy existing state to the new backend?
  Enter a value: yes

Successfully configured the backend "s3"!

Terraform detects that you are switching from local to S3 and offers to copy your existing state. Say yes. After this, your local terraform.tfstate file is no longer used — you can delete it (but keep a backup just in case).

If this is a brand new project with no existing state, terraform init simply configures the backend without any migration.

Step 5: Set Up Per-Environment State Isolation

The simplest approach is to use different key values for each environment. But typing different keys manually is error-prone. A better approach is to use partial backend configuration with -backend-config:

# backend.hcl — shared backend config
bucket         = "your-company-terraform-state"
region         = "ap-south-1"
dynamodb_table = "terraform-state-locks"
encrypt        = true

# main.tf — in your terraform block, only specify the key pattern
terraform {
  backend "s3" {
    key = "terraform.tfstate"  # Overridden at init time
  }
}

# Initialize with an environment-specific key
terraform init \
  -backend-config=backend.hcl \
  -backend-config="key=prod/my-app/terraform.tfstate"

# For staging
terraform init \
  -backend-config=backend.hcl \
  -backend-config="key=staging/my-app/terraform.tfstate"

This way the same Terraform code works for any environment. The state isolation happens at init time, not in the code. Your CI/CD pipeline can pass the right key based on the branch or environment.
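In a pipeline, the key can be derived from variables instead of typed by hand. A minimal sketch (the script name and argument conventions are my own; the echo prints the command so you can review it before dropping the echo to run it for real):

```shell
#!/usr/bin/env bash
# tf-init.sh -- hypothetical wrapper that derives the backend key from arguments
set -euo pipefail

build_state_key() {
  # e.g. build_state_key prod vpc -> prod/vpc/terraform.tfstate
  printf '%s/%s/terraform.tfstate' "$1" "$2"
}

env_name="${1:-dev}"
component="${2:-my-app}"
key="$(build_state_key "$env_name" "$component")"

# Print the init command; remove the echo to actually run it
echo terraform init \
  -backend-config=backend.hcl \
  -backend-config="key=${key}"
```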

What Happens During a Lock

Here is the exact sequence when someone runs terraform apply:

  1. Terraform writes a lock entry to DynamoDB with a unique ID, the user's info, and a timestamp
  2. If the write succeeds, Terraform proceeds with the plan and apply
  3. If another lock already exists, Terraform shows the error "Error acquiring the state lock" with details about who holds it
  4. After apply completes (or fails), Terraform removes the lock entry from DynamoDB
  5. The next person can now acquire the lock and run their changes

If Terraform crashes mid-apply (your laptop dies, your terminal closes, the CI runner times out), the lock stays in DynamoDB. This is by design — it prevents someone else from applying on top of a potentially corrupted state.
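The conditional-write behaviour behind this sequence can be modelled in a few lines. This is an illustration only, not Terraform's actual code — the real client issues a DynamoDB PutItem with an attribute_not_exists(LockID) condition, which is what makes acquisition atomic:

```python
# Toy model of DynamoDB-style conditional-write locking.
import uuid


class LockTable:
    """In-memory stand-in for the DynamoDB lock table."""

    def __init__(self):
        self.items = {}  # LockID -> lock info

    def acquire(self, lock_id, who):
        # Conditional put: succeeds only if no item with this LockID exists,
        # mirroring DynamoDB's ConditionalCheckFailedException behaviour.
        if lock_id in self.items:
            holder = self.items[lock_id]
            raise RuntimeError(
                f"Error acquiring the state lock: held by {holder['who']}"
            )
        info = {"ID": str(uuid.uuid4()), "who": who}
        self.items[lock_id] = info
        return info

    def release(self, lock_id):
        self.items.pop(lock_id, None)


table = LockTable()
lock_key = "my-bucket/prod/vpc/terraform.tfstate"

table.acquire(lock_key, "akshay@laptop")    # first apply: lock acquired
try:
    table.acquire(lock_key, "teammate@ci")  # second apply: refused while held
except RuntimeError as e:
    print(e)
table.release(lock_key)                     # apply finished: lock removed
table.acquire(lock_key, "teammate@ci")      # now the teammate can proceed
```

A crashed run is simply one that never reaches release, which is exactly why a stale lock lingers until someone force-unlocks it.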

Fixing "Error Acquiring the State Lock"

This is the most common issue people hit with remote state. The error looks like:

Error: Error acquiring the state lock

Error message: ConditionalCheckFailedException: The conditional request failed
Lock Info:
  ID:        a1b2c3d4-e5f6-7890-abcd-ef1234567890
  Path:      your-company-terraform-state/prod/vpc/terraform.tfstate
  Operation: OperationTypeApply
  Who:       akshay@laptop
  Version:   1.5.7
  Created:   2026-03-31 10:15:23.456789 +0000 UTC

Before doing anything, check with your team. If "akshay@laptop" is currently running an apply, wait for them to finish. Only force-unlock if you are certain no one is running Terraform.

# Only if you are SURE no one is running Terraform
terraform force-unlock a1b2c3d4-e5f6-7890-abcd-ef1234567890

Force-unlocking while someone is mid-apply is one of the fastest ways to corrupt your state. Always verify first. A Slack message takes 10 seconds. Recovering from corrupted state takes hours.

State File Security

Your Terraform state file contains sensitive data — database passwords, API keys, private IPs, and resource ARNs. Treat it like a credential store:

  • Never commit state to git. Add *.tfstate and *.tfstate.backup to your .gitignore.
  • Encrypt at rest. The S3 backend configuration with encrypt = true handles this.
  • Restrict access. Only IAM roles that need to run Terraform should have access to the state bucket. Use a bucket policy to deny access to everyone else.
  • Enable versioning. If state gets corrupted, you can roll back to a previous version from S3 version history.
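One way to restrict access is a deny-by-default bucket policy. The sketch below assumes a role named terraform-admin and a placeholder account ID — substitute your own; everything except that role (and the root account, which bucket policies cannot lock out) is denied:

```hcl
resource "aws_s3_bucket_policy" "state_access" {
  bucket = aws_s3_bucket.terraform_state.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "DenyAllExceptTerraformRole"
      Effect    = "Deny"
      Principal = "*"
      Action    = "s3:*"
      Resource = [
        aws_s3_bucket.terraform_state.arn,
        "${aws_s3_bucket.terraform_state.arn}/*",
      ]
      Condition = {
        StringNotLike = {
          # Placeholder account ID and role name -- replace with yours
          "aws:PrincipalArn" = "arn:aws:iam::123456789012:role/terraform-admin"
        }
      }
    }]
  })
}
```

Test a policy like this carefully in a sandbox account first: an explicit Deny that matches the wrong principals can lock you out of your own state bucket.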

Common Mistakes to Avoid

  1. Using the same state file for all environments. A bad terraform destroy in dev should never risk touching production. Always use separate keys or separate buckets per environment.
  2. Forgetting to enable versioning. Without versioning, a corrupted state file is gone forever. With versioning, you go to S3, find the previous version, and restore it.
  3. Hardcoding the backend configuration. Use partial backend config with -backend-config files so the same code works across environments. Hardcoded backends lead to copy-paste errors.
  4. Wrong DynamoDB hash key name. It must be exactly LockID — capital L, capital I, capital D. With any other spelling, DynamoDB rejects the lock writes and Terraform cannot acquire a lock.
  5. Not restricting bucket access. Everyone on the AWS account can read your state by default. State contains secrets. Lock down the bucket policy to only the roles that need it.

Frequently Asked Questions

What happens if two people run terraform apply at the same time?

With DynamoDB locking configured, the second person gets an "Error acquiring the state lock" message and has to wait. Without locking, both operations run simultaneously and corrupt the state file — Terraform loses track of what exists. Always use DynamoDB locking.

Why use S3 instead of Terraform Cloud?

S3 gives you full control at minimal cost — a few cents per month. Terraform Cloud costs $20 per user per month for teams. If you are already on AWS and just need reliable state storage with locking, S3 plus DynamoDB is simpler and cheaper.

How do I fix "Error acquiring the state lock"?

First, check if someone on your team is actually running Terraform. If not, the lock is stale from a crashed run. Use terraform force-unlock LOCK_ID with the ID from the error message. Only do this after confirming no one else is mid-apply.

Should I use one state file or separate per environment?

Always separate. Use different S3 keys like prod/vpc/terraform.tfstate and dev/vpc/terraform.tfstate. A mistake in dev should have zero risk of affecting production.

Do I create the S3 bucket and DynamoDB table manually?

The backend infrastructure must exist before Terraform can use it. Create them once using a separate Terraform config, the AWS CLI, or the console. It is a chicken-and-egg problem — Terraform cannot create the backend it needs to store its own state.


Using This with Your Infrastructure

Once your remote state is configured, every VPC, RDS database, and ECS service you build with Terraform stores its state safely in S3. Your team can collaborate without fear of corruption, and your CI/CD pipeline can run terraform apply knowing it will not collide with a developer's local run.

This is the foundation that every Terraform project needs before anything else. Set it up once, and every project you create from here benefits from it.


Akshay Ghalme

AWS DevOps Engineer with 3+ years building production cloud infrastructure. AWS Certified Solutions Architect. Currently managing a multi-tenant SaaS platform serving 1000+ customers.
