Goodbye DynamoDB: Terraform S3 Native State Locking Is Here and How to Migrate
Terraform 1.11 deprecates DynamoDB-based locking. Learn how S3 native state locking works and how to migrate safely without breaking your infrastructure.
For years, every Terraform tutorial told you the same thing: S3 for state, DynamoDB for locking. That advice is now officially deprecated. Terraform 1.11 made S3 native state locking generally available, built on Amazon S3 Conditional Writes and eliminating the need for DynamoDB entirely. The community did not celebrate. Most teams did not notice. We did, and we helped migrate a dozen Terraform stacks from DynamoDB to S3. The results: simpler infrastructure, fewer IAM permissions, and faster deployments.
The Problem
We maintained a Terraform setup with S3 state storage backed by DynamoDB for locking. Two AWS resources to manage. Two sets of IAM permissions to grant. When DynamoDB was down, Terraform applies failed even though the S3 bucket was fine. We had to explain to new team members why a NoSQL database was managing file locks. Nobody understood until they studied the problem themselves. DynamoDB added complexity without clear benefit. Every new team member asked: why not just use S3? We had no good answer.
Why This Happened
S3 historically lacked the native atomic conditional writes needed for state locking. DynamoDB was the go-to AWS service for atomic conditional put operations, making it the standard companion to S3 for years. AWS added Conditional Writes to S3 in August 2024, enabling atomic create-only operations at the S3 API level. Terraform 1.10 introduced S3 native locking as experimental; Terraform 1.11 made it generally available (GA). The community largely ignored it because Terraform works fine with DynamoDB, and migration seemed unnecessary until teams started scrutinizing their DynamoDB costs.
The Solution
How S3 Native Locking Works
When you run terraform apply with S3 native locking, Terraform creates a lock file alongside the state (the state key with a .tflock suffix) using a conditional write (the If-None-Match header). Only one apply can hold the lock, because the conditional write fails if the lock object already exists. When the apply completes, Terraform deletes the lock file. If the apply crashes, the lock file remains, and you remove it manually with terraform force-unlock.
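The flow above can be sketched in a few lines. This is a minimal illustration, not Terraform's actual implementation: S3's If-None-Match conditional write is simulated with an in-memory dict, and the names (put_if_absent, acquire_lock, release_lock) are invented for the sketch.

```python
# Sketch of the S3 native locking protocol. The bucket and its
# conditional write are simulated locally; real Terraform issues a
# PutObject with If-None-Match against S3.
import json
from datetime import datetime, timezone

bucket = {}  # stands in for the S3 bucket: key -> object body

def put_if_absent(key: str, body: str) -> bool:
    """Emulate PutObject with If-None-Match: an atomic create-only write."""
    if key in bucket:
        return False  # S3 would answer HTTP 412 Precondition Failed
    bucket[key] = body
    return True

def acquire_lock(state_key: str, who: str) -> bool:
    lock_key = state_key + ".tflock"
    info = {"Who": who, "Created": datetime.now(timezone.utc).isoformat()}
    return put_if_absent(lock_key, json.dumps(info))

def release_lock(state_key: str) -> None:
    bucket.pop(state_key + ".tflock", None)  # DeleteObject after apply

# First apply wins the lock; a concurrent apply is rejected.
print(acquire_lock("prod/terraform.tfstate", "alice"))  # True
print(acquire_lock("prod/terraform.tfstate", "bob"))    # False
release_lock("prod/terraform.tfstate")
print(acquire_lock("prod/terraform.tfstate", "bob"))    # True
```

The atomicity lives entirely in the conditional write: there is no read-then-write race, because S3 itself rejects the second create.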
For Fresh Setups (New Terraform Projects)
terraform {
  required_version = "~> 1.11.0"

  backend "s3" {
    bucket       = "skillzmist-terraform-state"
    key          = "prod/terraform.tfstate"
    region       = "us-east-1"
    encrypt      = true
    use_lockfile = true # Enable S3 native locking
  }
}
That is all. One line: use_lockfile = true. No DynamoDB table. No dynamodb_table parameter. Terraform handles everything.
Migration Strategy: Existing DynamoDB + S3 Setup
Phase 1: Enable S3 locking while keeping DynamoDB (Safe Testing)
# This config uses BOTH systems simultaneously:
# Terraform acquires the S3 lock and the DynamoDB lock.
# Safe because if S3 locking fails, DynamoDB still prevents concurrent applies.
# Good for testing S3 locking before removing DynamoDB.
terraform {
  backend "s3" {
    bucket         = "skillzmist-terraform-state"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks" # Keep this temporarily
    use_lockfile   = true              # Add this
  }
}
Deploy this config and run terraform init -reconfigure to pick up the backend change. Then run terraform plan and terraform apply in staging and verify everything works. Watch the logs and dashboards. After 1-2 weeks of successful operation, proceed to phase 2.
Phase 2: Remove DynamoDB (Production Cutover)
terraform {
  backend "s3" {
    bucket       = "skillzmist-terraform-state"
    key          = "prod/terraform.tfstate"
    region       = "us-east-1"
    encrypt      = true
    use_lockfile = true
    # dynamodb_table parameter removed
  }
}
Run terraform init -reconfigure to reinitialize the backend. No lock data needs to migrate: locks are ephemeral, created at the start of an operation and deleted at the end, so removing the dynamodb_table parameter simply stops Terraform from touching DynamoDB.
Verify S3 Locking Is Active
# During terraform apply, open a second terminal
# and try to apply again; lock acquisition now fails at the S3 layer
terraform apply
# The error references the S3 lock object, not DynamoDB. Expect something like:
# Error: Error acquiring the state lock
# Error message: operation error S3: PutObject, api error
# PreconditionFailed (HTTP 412): the lock file already exists
This error confirms S3 native locking is working. DynamoDB is not involved.
Phase 3: Delete DynamoDB (Cleanup)
# After successful production deploys for 1-2 weeks, delete the DynamoDB table
aws dynamodb delete-table --table-name terraform-locks --region us-east-1
# Also remove any IAM permissions that granted DynamoDB access
Handling Stuck Lock Files
If a lock file does not clear after an apply (process crash, network failure), use terraform force-unlock or delete the S3 lock file manually:
# Option 1: Use Terraform (the lock ID appears in the lock error output)
terraform force-unlock <LOCK_ID>
# Option 2: Delete manually from S3
aws s3 rm s3://skillzmist-terraform-state/prod/terraform.tfstate.tflock
# Option 3: Check what is there
aws s3 ls s3://skillzmist-terraform-state/prod/
With DynamoDB, clearing a stuck lock meant logging into the AWS console, finding the lock table, opening the item, and deleting the record by hand. With S3, it is a single aws s3 rm command.
What Changed Under the Hood
S3 Conditional Writes: The If-None-Match header tells S3 to create the object only if it does not already exist. If the object exists, the write fails atomically with HTTP 412 Precondition Failed. Terraform wraps this in error handling and retry logic.
Lock File Format: Terraform creates a .tfstate.tflock object (JSON) containing lock metadata: who locked it, when, and why. This is the same information Terraform previously stored in the DynamoDB LockID item, just kept in S3.
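For illustration, a small sketch that inspects a downloaded lock file and reports who holds the lock. The field names (ID, Operation, Who, Created) mirror Terraform's lock-info JSON, but treat the exact schema as an assumption and compare against a real lock file from your bucket.

```python
# Parse Terraform lock metadata and report the holder and lock age.
# The sample content below is invented for the sketch.
import json
from datetime import datetime, timezone

sample_tflock = """{
  "ID": "b2f9754c-8ecc-4c91-9f01-0b4a9a8207df",
  "Operation": "OperationTypeApply",
  "Who": "alice@build-agent",
  "Created": "2025-01-15T10:30:00.000000Z"
}"""

def describe_lock(raw: str) -> str:
    info = json.loads(raw)
    created = datetime.fromisoformat(info["Created"].replace("Z", "+00:00"))
    age = datetime.now(timezone.utc) - created
    return (f"{info['Who']} has held the lock for "
            f"{age.total_seconds():.0f}s (ID {info['ID']})")

print(describe_lock(sample_tflock))
```

A check like this is handy before running terraform force-unlock: if the holder is a CI job that died an hour ago, unlocking is safe; if it is a colleague's apply from two minutes ago, it is not.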
Cost Impact: S3 PUT requests cost $0.005 per 1,000; DynamoDB on-demand writes cost on the order of $1.25 per million write request units. For a team running 50 terraform applies per day, the request charges on either side are negligible; the real savings come from retiring the lock table itself (provisioned capacity, backups, monitoring) and the IAM surface around it. Across the stacks we migrated, that worked out to roughly $40/month. Minor savings, but simpler infrastructure matters.
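To put the S3 side in numbers, a quick back-of-the-envelope sketch (prices are us-east-1 list prices and can drift; each apply issues one PutObject for the lock, and DeleteObject requests are free):

```python
# Monthly S3 request cost for lock acquisition at 50 applies/day.
applies_per_day = 50
put_price_per_1000 = 0.005  # S3 Standard PUT request pricing, USD

monthly_puts = applies_per_day * 30          # 1500 lock writes per month
s3_lock_cost = monthly_puts / 1000 * put_price_per_1000
print(f"{monthly_puts} PUTs/month -> ${s3_lock_cost:.4f}")  # fractions of a cent
```

In other words, the S3 locking traffic itself is effectively free; whatever you were paying for the DynamoDB table drops to zero.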
S3 Versioning Still Required: Enable versioning on your Terraform state bucket so you can roll back the state to any previous version.
Common Mistakes to Avoid
- Migrating to S3 locking in production without testing in staging first. Test the migration in staging for 1-2 weeks before production cutover.
- Disabling S3 bucket versioning after removing DynamoDB. Versioning is still required for state rollback. Remove DynamoDB, keep versioning.
- Removing DynamoDB config BEFORE verifying S3 locking works. Use phase 1 to test with both systems. After verification, remove DynamoDB.
- Using Terraform version below 1.11 with use_lockfile=true. Use 1.11 GA, not experimental 1.10. GA is stable, experimental is not.
- Not updating IAM policies after removing DynamoDB. If policies still grant DynamoDB permissions, they are unnecessary. Clean them up to follow least-privilege.
- Forgetting to delete the DynamoDB table after migration. Teams keep paying for DynamoDB months after migration because they forgot to delete it.
- Not communicating the migration to the entire team before switching. Concurrent applies during migration are dangerous. Notify the team, migrate together.
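On the IAM cleanup point above: once DynamoDB is gone, the backend identity needs only S3 permissions. A minimal policy sketch, assuming the bucket and key from the examples in this post; adjust the resource paths to your layout, and note the action list can vary (for example, add KMS permissions if the bucket uses a customer-managed key).

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::skillzmist-terraform-state"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::skillzmist-terraform-state/prod/terraform.tfstate*"
    }
  ]
}
```

The trailing wildcard on the object resource covers both the state file and its .tflock neighbor, and s3:DeleteObject is what lets Terraform release the lock.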
Key Takeaways
- Terraform 1.11 S3 native locking is production-ready: set use_lockfile = true for all new projects.
- Migration is safe with a phased approach: Enable S3 locking while keeping DynamoDB, test, then remove DynamoDB.
- Simpler infrastructure: One fewer AWS resource to manage, one fewer set of IAM permissions to maintain.
- S3 locking uses conditional writes: Atomic at the API level, no external database needed.
- Cost savings are minor but infrastructure simplicity matters: Save $40/month and hours of team onboarding.
Struggling with Terraform state management or planning a migration from DynamoDB locking? The Skillzmist team has solved this exact problem for engineering teams across the US, UK, and Europe. Reach out for a free technical consultation — we respond within 24 hours.
Related: The Terraform Folder Structure That Scales | 7 Terraform Problems Every DevOps Engineer Faces