
The "resource already exists" error happens when Terraform tries to create something that's already in your cloud account but not in its state file. It's a state synchronization problem, and it's one of the more common Terraform issues. This guide covers why it happens, how to fix it, and how infrastructure-from-code tools like Encore avoid state conflicts by not maintaining a separate state file.
The error typically looks like this:
Error: creating EC2 Instance: IdempotentParameterMismatch:
An instance with the same client token already exists.
Error: error creating S3 Bucket (my-app-uploads):
BucketAlreadyOwnedByYou: Your previous request to create the
named bucket succeeded and you already own it.
Error: error creating Security Group (my-app-sg):
InvalidGroup.Duplicate: A security group with the same name
already exists in this VPC.
The resource exists in your cloud account, but Terraform doesn't know about it. So Terraform tries to create it from scratch, and the cloud provider rejects the request. The fix depends on how the state got out of sync in the first place.
Terraform tracks every resource it manages in a state file. When you define an aws_s3_bucket in your config and run apply, Terraform creates the bucket and records its ID, ARN, and attributes in state. On subsequent runs, Terraform uses that state entry to know the bucket already exists and should be updated rather than recreated.
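For illustration, a heavily abbreviated state entry for that bucket might look like the following (real state files carry many more attributes and should never be edited by hand):

```json
{
  "resources": [
    {
      "type": "aws_s3_bucket",
      "name": "uploads",
      "provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
      "instances": [
        {
          "attributes": {
            "id": "my-app-uploads",
            "arn": "arn:aws:s3:::my-app-uploads"
          }
        }
      ]
    }
  ]
}
```

The "id" field is what links the config block to the real bucket. When that entry is missing, Terraform has no link and plans a create.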
The "resource already exists" error means the state file is missing an entry for a resource that's already deployed. There are a few common ways this happens.
A previous apply was interrupted. Terraform created the resource in AWS but crashed or was killed before writing the result to state. The resource exists in your account, but the state file has no record of it. The next apply tries to create it again.
Someone created the resource manually. A developer logged into the AWS console during an incident and created a security group or database directly. Later, someone adds the same resource to the Terraform configuration. On apply, the cloud provider says it already exists.
The state entry was removed but the resource wasn't destroyed. Running terraform state rm removes a resource from tracking without deleting the actual infrastructure. If you later re-add that resource to your config, Terraform will try to create it fresh and hit the duplicate error.
A terraform import was never completed. You started bringing existing infrastructure under Terraform management, wrote the config, but forgot to run the import. Or the import was run against a different workspace or state file.
State file was restored from a backup. If you rolled back your state file to an earlier version (to recover from corruption, for example), any resources created after that backup point will exist in your cloud account but not in state.
The most common fix is to import the existing resource into state. Importing tells Terraform "this resource in my config corresponds to that resource in my cloud account."
terraform import aws_s3_bucket.uploads my-app-uploads
terraform import aws_security_group.app sg-0a1b2c3d4e5f67890
terraform import aws_db_instance.main my-database-identifier
The general syntax is terraform import <resource_address> <cloud_resource_id>. The resource address is the Terraform identifier from your .tf file. The cloud resource ID varies by provider and resource type. Check the Terraform provider docs for the import syntax of each resource.
After importing, run terraform plan to check whether your config matches the actual resource. If there are differences, update your .tf files to match reality, or accept that Terraform will modify the resource on the next apply.
terraform plan
If the plan shows no changes, the import was clean. If it shows modifications, review them before applying.
Sometimes the state file has a broken reference to a resource. The entry exists in state but points to the wrong ID, or the resource was replaced outside Terraform. In this case, remove the stale entry first, then import the correct resource.
terraform state rm aws_instance.app_server
terraform import aws_instance.app_server i-0abc123def456789
Use terraform state list and terraform state show to inspect what's currently tracked:
# List all resources in state
terraform state list
# Show details for a specific resource
terraform state show aws_s3_bucket.uploads
If you don't need Terraform to manage the existing resource, change the resource name in your config so there's no collision. For resources where the cloud provider enforces unique names (S3 buckets, security groups within a VPC), give the new resource a different name:
resource "aws_s3_bucket" "uploads" {
  bucket = "my-app-uploads-v2" # Different name to avoid conflict
}
This creates a new resource alongside the existing one. You'll need to migrate any data or references from the old resource to the new one.
If the existing resource was created by mistake or is expendable, delete it through the cloud console or CLI, then let terraform apply create it fresh:
aws s3 rb s3://my-app-uploads # Only if the bucket is empty and expendable
terraform apply
Be careful with this approach. Deleting databases, buckets with data, or resources that other services depend on can cause outages.
Don't interrupt terraform apply. If you need to cancel, send a single Ctrl+C and let Terraform finish its current operation gracefully. A second Ctrl+C force-kills the process and risks the state inconsistency described above.
Restrict manual changes to production infrastructure. Use IAM policies to give most team members read-only console access. When manual changes are necessary, document them and update the Terraform config promptly.
Run terraform plan in CI before every merge. This catches state mismatches early. If the plan shows Terraform trying to create a resource that should already exist, you know the state is out of sync before it blocks a deploy.
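One way to wire this into CI is terraform plan with the -detailed-exitcode flag, which distinguishes "no changes" from "changes pending." A sketch of a CI step, not tied to any particular CI system:

```shell
# With -detailed-exitcode: 0 = no changes, 1 = error, 2 = changes pending
terraform plan -detailed-exitcode -out=tfplan
case $? in
  0) echo "State matches config" ;;
  2) echo "Plan has changes; review for unexpected creates before merging" ;;
  *) echo "Plan failed" >&2; exit 1 ;;
esac
```

A plan that wants to create a resource you know already exists is the earliest visible symptom of missing state.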
Enable state file versioning. With S3 bucket versioning or GCS object versioning, you can recover previous state versions if a bad apply corrupts the current one, rather than losing track of resources entirely.
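As a sketch, a remote S3 backend might look like this (bucket and table names are placeholders; versioning itself is enabled on the bucket, not in this block):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"    # this bucket should have versioning enabled
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"       # optional: state locking
    encrypt        = true
  }
}
```

With versioning on the state bucket, recovering from a bad apply is a matter of restoring the previous object version rather than rebuilding state by hand.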
Use terraform import blocks (Terraform 1.5+). Instead of running import as a CLI command, declare imports in your configuration. This makes imports part of the normal plan/apply workflow and reviewable in version control:
import {
  to = aws_s3_bucket.uploads
  id = "my-app-uploads"
}
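Terraform 1.7 added the inverse: a removed block, which drops a resource from state without destroying the real infrastructure, replacing the imperative terraform state rm with something reviewable in version control. A sketch, using the instance from the earlier example:

```hcl
removed {
  from = aws_instance.app_server

  lifecycle {
    destroy = false # keep the real infrastructure; only forget it in state
  }
}
```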
Every one of these scenarios traces back to the same thing: a state file that's out of sync with reality. The state file is a coordination artifact that sits between your code and your infrastructure, and keeping it accurate is a permanent operational responsibility. Lock errors, drift, and "resource already exists" are all different manifestations of the same architectural tradeoff.
Infrastructure-from-code tools like Encore avoid this class of problems entirely. You declare infrastructure resources (databases, pub/sub topics, caches, object storage) as part of your TypeScript or Go application code. When you deploy, the platform reads your code, determines what infrastructure is needed, and provisions it in your AWS or GCP account. There's no separate state file to corrupt, lose, or get out of sync.
If you're using Terraform today, the import/state rm workflow above will get you unblocked. But if state management is consuming real team time, it's worth knowing that alternatives exist.