State lock errors are one of the most common Terraform issues, and they're a direct consequence of the state-file architecture. This guide covers how to fix them, when force-unlock is safe, and how to prevent them in CI/CD. Infrastructure-from-code tools like Encore avoid this entirely by not using state files, but if you're working within Terraform today, here's how to get unblocked.
The error typically looks like:
Error: Error acquiring the state lock

Error message: ConditionalCheckFailedException: The conditional request failed
Lock Info:
  ID:        a1b2c3d4-e5f6-7890-abcd-ef1234567890
  Path:      my-project/terraform.tfstate
  Operation: OperationTypeApply
  Who:       user@hostname
  Version:   1.7.0
  Created:   2026-03-19 14:23:01.123456 +0000 UTC
  Info:

Terraform acquires a state lock to protect against two processes writing to the state at the same time. Please resolve the issue above and try again.
Here's how to fix it.
Terraform uses a state file to track the mapping between your HCL configuration and real infrastructure resources. When you run plan, apply, or destroy, Terraform writes to this file. The state lock prevents two operations from modifying the state file simultaneously, which would corrupt it and potentially leave resources in an inconsistent state.
For remote backends (S3, GCS, Terraform Cloud), the lock is typically implemented through a side channel. With S3, it's a DynamoDB table entry. With GCS, it's a .tflock file created in the same bucket as the state. The lock entry contains metadata about who acquired it and when.
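As a concrete reference, a minimal S3 backend configuration with DynamoDB locking looks like this. The bucket and table names are placeholders; newer Terraform releases also offer S3-native locking without a DynamoDB table.

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-bucket"          # placeholder bucket name
    key            = "my-project/terraform.tfstate" # path to the state object
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"              # table holding lock entries
    encrypt        = true
  }
}
```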
A crashed or killed process. This is the most frequent cause. Your terraform apply was interrupted mid-run (Ctrl+C twice, killed terminal, OOM kill, SSH disconnect), and Terraform never got the chance to release the lock.
A CI/CD pipeline failure. Your GitHub Actions runner timed out, your Jenkins node died, or the pipeline was cancelled. Same outcome: the lock was acquired but never released.
Concurrent runs. Two developers ran terraform apply on the same workspace at the same time, or two CI pipelines triggered simultaneously. One got the lock, and the other got the error. This is the lock working as intended.
Stale lock entries. The DynamoDB item or GCS lock object persists after the process that created it is long gone. This happens when the backend's lock cleanup fails silently.
Before doing anything else, verify that no legitimate Terraform process is still running. Check the lock info in the error message. Look at the Who field and Created timestamp. If the lock was created seconds ago by a colleague, wait for their operation to finish.
# Check for running Terraform processes
ps aux | grep terraform
# If using CI/CD, check for active pipeline runs
# in your CI system's UI
If there's a running process that owns the lock, let it complete. Force-unlocking while another apply is in progress will corrupt your state.
Once you've confirmed no other process is running, use terraform force-unlock with the lock ID from the error message:
terraform force-unlock a1b2c3d4-e5f6-7890-abcd-ef1234567890
Terraform will ask for confirmation. Type yes. The lock will be released and you can retry your operation.
If you want to skip the confirmation prompt (useful in automation, but be careful):
terraform force-unlock -force a1b2c3d4-e5f6-7890-abcd-ef1234567890
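In automation, the lock ID can be parsed out of captured error output before calling force-unlock. Here's a minimal sketch; `extract_lock_id` is a hypothetical helper name, and the log filename is an assumption:

```shell
#!/usr/bin/env bash
# Hypothetical helper: pull the lock ID (a UUID) out of captured
# Terraform error output so a script can pass it to force-unlock.
extract_lock_id() {
  # Match the "ID:" line of the Lock Info block and print just the UUID.
  grep -oE 'ID:[[:space:]]+[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}' \
    | awk '{print $2}' | head -n1
}

# Example (assumes the failed run's output was captured to apply.log):
#   lock_id=$(extract_lock_id < apply.log)
#   [ -n "$lock_id" ] && terraform force-unlock -force "$lock_id"
```

Guard the force-unlock call behind the same "is anything still running?" check described above; parsing the ID doesn't make skipping that check any safer.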
Sometimes force-unlock fails because the lock metadata is in a bad state. For S3 backends with DynamoDB, you can clear the lock entry directly:
# Find the lock entry
aws dynamodb scan \
  --table-name terraform-locks \
  --filter-expression "LockID = :lid" \
  --expression-attribute-values '{":lid":{"S":"my-project/terraform.tfstate"}}'

# Delete the stale lock
aws dynamodb delete-item \
  --table-name terraform-locks \
  --key '{"LockID": {"S": "my-project/terraform.tfstate"}}'
For GCS backends:
gsutil rm gs://my-terraform-bucket/my-project/default.tflock
After manually clearing the lock, run terraform plan first to verify state consistency before applying any changes.
After resolving the lock, make sure the state file wasn't corrupted by the interrupted operation:
terraform plan
Review the plan output carefully. If Terraform wants to recreate resources that already exist, your state may be out of sync. In that case, you may need to use terraform import to reconcile, or restore a previous state version from your backend's versioning (S3 versioning, GCS object versioning).
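If you do need to roll back, a previous state version can be pulled from a versioned S3 bucket and pushed back as the current state. A sketch with placeholder bucket/key names (the version ID comes from the list output):

```shell
# List available versions of the state object
aws s3api list-object-versions \
  --bucket my-terraform-bucket \
  --prefix my-project/terraform.tfstate

# Download a known-good version (substitute its VersionId from above)
aws s3api get-object \
  --bucket my-terraform-bucket \
  --key my-project/terraform.tfstate \
  --version-id "<VERSION_ID>" \
  terraform.tfstate.recovered

# Inspect the file, then push it as the current state
terraform state push terraform.tfstate.recovered
```

Run terraform plan again after pushing to confirm the restored state matches reality.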
Single-pipeline enforcement. Configure your CI/CD to ensure only one Terraform operation runs per workspace at a time. In GitHub Actions, use concurrency groups:
concurrency:
  group: terraform-${{ github.ref }}
  cancel-in-progress: false  # Don't cancel running applies
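For context, here's a sketch of where that block sits in a full workflow; the job and step layout is illustrative, not prescriptive:

```yaml
name: terraform
on:
  push:
    branches: [main]

concurrency:
  group: terraform-${{ github.ref }}
  cancel-in-progress: false  # never kill an apply that holds the lock

jobs:
  apply:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform apply -auto-approve -lock-timeout=5m
```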
Graceful signal handling. If you're running Terraform in a wrapper script, make sure SIGTERM and SIGINT are forwarded so Terraform can clean up and release the lock before exiting.
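A minimal sketch of such a wrapper, assuming bash; `run_with_forwarding` is a hypothetical helper name:

```shell
#!/usr/bin/env bash
# Sketch: forward SIGINT/SIGTERM to a child process so an interrupted
# `terraform apply` still gets a chance to release the state lock.
run_with_forwarding() {
  "$@" &                 # start the child (e.g. terraform apply)
  local child=$! status=0

  # Forward termination signals instead of exiting and orphaning the child.
  trap 'kill -TERM "$child" 2>/dev/null' TERM
  trap 'kill -INT "$child" 2>/dev/null' INT

  # `wait` returns early when a trapped signal arrives, so loop until
  # the child has actually exited and we have its real exit status.
  while true; do
    if wait "$child"; then
      status=0
      break
    else
      status=$?
    fi
    kill -0 "$child" 2>/dev/null || break
  done

  trap - TERM INT        # restore default handling for the parent
  return "$status"
}

# Usage: run_with_forwarding terraform apply -auto-approve
```

The key detail is re-waiting after a signal: Terraform traps the forwarded signal, finishes or aborts cleanly, and releases the lock before the wrapper exits.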
Lock timeouts. Some backends support configuring how long to wait for a lock before failing. This won't prevent the error, but it helps when operations overlap slightly:
terraform apply -lock-timeout=5m
State file versioning. Always enable versioning on your state backend (S3 bucket versioning, GCS object versioning). This won't prevent locks, but it gives you a recovery path when things go wrong.
Smaller blast radius. Split large Terraform configurations into smaller, independent workspaces. Fewer resources per state file means shorter apply times, which means a smaller window for lock collisions and interrupted operations.
State lock errors are a symptom of a deeper architectural choice. Terraform needs an external state file because HCL configuration alone doesn't contain enough information to manage infrastructure. The state file tracks resource IDs, dependency ordering, and metadata that Terraform can't derive from configuration.
This creates an entire category of coordination problems: state locks, state drift, state corruption, and the operational burden of managing state backends. Every team running Terraform at scale has dealt with some combination of these issues.
The alternative model is infrastructure from code, where infrastructure requirements are declared within your application code and the tooling derives the infrastructure graph directly. There's no separate state file to lock because the application code itself is the source of truth. Platforms like Encore work this way: you declare resources like databases, caches, and pub/sub topics as part of your TypeScript or Go application, and infrastructure is provisioned automatically in your cloud account. There's no HCL to write, no state file to manage, and no locks to contend with.
That said, if you're working within Terraform today, the steps above should get you unblocked.