Every few months, a blog post or conference talk declares that infrastructure as code is dead. The arguments have been circulating since at least 2023, but they've picked up momentum recently because the conditions that made IaC necessary are shifting under teams that still rely on it.
IaC isn't dead. But the role it plays is getting smaller, and the problems it was designed to solve are increasingly handled by other approaches. Understanding why requires looking at what IaC got right, what changed, and where the friction now lives.
Before Terraform and its predecessors, provisioning infrastructure meant clicking through cloud consoles or running ad-hoc scripts that nobody else could reproduce. If a staging environment drifted from production, you might not find out until something broke. If a team member left, their knowledge of how the infrastructure was configured often left with them.
IaC addressed this by treating infrastructure definitions as source code. You wrote your VPC, your database, your load balancer in a configuration language, checked it into version control, and applied it through a pipeline. The benefits were real and significant: reproducible environments, peer-reviewed changes, rollback capability, and a single source of truth for what was deployed.
For organizations managing dozens or hundreds of cloud resources across multiple environments, this was transformative. It replaced tribal knowledge with reviewable, diffable, versionable files. Teams could onboard new members by pointing them at a repository instead of a wiki that was six months out of date.
The "IaC is dead" argument isn't really about whether configuration-as-code was a good idea. It's about whether the specific implementation pattern that Terraform popularized still fits how teams build software in 2026.
AI is generating application code faster than teams can write Terraform. When a developer uses Cursor or Claude Code to build a new service in an afternoon, the bottleneck isn't writing the business logic anymore. It's the infrastructure work that follows: writing the Terraform for a new database, updating IAM policies, configuring the deployment pipeline. AI tools can generate application code fluently, but they struggle with Terraform because HCL has less training data, the configurations are highly environment-specific, and small mistakes cause cascading failures during apply.
State management remains a persistent source of pain. Terraform's state file tracks the mapping between your configuration and real cloud resources. It has to be stored remotely, locked during operations, and occasionally surgically edited when things go wrong. State lock errors, drift between state and reality, and corrupted state files are familiar problems for any team that's used Terraform at scale. The state file is an implementation necessity, not a feature anyone asked for.
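To make the failure mode concrete, here is a minimal conceptual sketch, not Terraform's actual implementation and with invented names throughout, of the three-way comparison a state file forces: the configuration you wrote, the state the tool last recorded, and what actually exists in the cloud. Drift is the gap between the last two.

```typescript
// Hypothetical three-way model of how a state-based tool sees the world.
// None of these names come from Terraform; this is a conceptual sketch.
type ResourceId = string;
type Attrs = Record<string, string>;

interface WorldView {
  config: Map<ResourceId, Attrs>; // what your configuration files declare
  state: Map<ResourceId, Attrs>;  // what the state file last recorded
  actual: Map<ResourceId, Attrs>; // what the cloud API reports right now
}

// Drift: the state file disagrees with reality, usually because someone
// changed a resource outside the tool (console click, script, outage fix).
function detectDrift(w: WorldView): ResourceId[] {
  const drifted: ResourceId[] = [];
  for (const [id, recorded] of w.state) {
    const live = w.actual.get(id);
    if (!live || JSON.stringify(live) !== JSON.stringify(recorded)) {
      drifted.push(id);
    }
  }
  return drifted;
}

const world: WorldView = {
  config: new Map([["db", { size: "small" }]]),
  state: new Map([["db", { size: "small" }]]),
  // Someone resized the instance in the console; state was never updated.
  actual: new Map([["db", { size: "large" }]]),
};

console.log(detectDrift(world)); // → ["db"]
```

The sketch also shows why the state file is "an implementation necessity": without the middle map, the tool has no way to know which real resources correspond to which declarations.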
The two-codebase problem creates friction. Your application code lives in one place and your infrastructure code lives in another. When you add a new service that needs a database, you make changes in two repositories (or two directories), often reviewed by two different people and deployed through two different pipelines. Keeping these in sync is a coordination problem that grows with team size. The friction is especially visible in AI-assisted development, where generated application code frequently assumes infrastructure that hasn't been provisioned yet.

The HashiCorp acquisition added uncertainty. IBM's acquisition of HashiCorp and the earlier BSL license change prompted the OpenTofu fork and raised questions about the long-term direction of the Terraform ecosystem. Some teams have migrated to OpenTofu, others to Pulumi, and others have started exploring whether they need a separate infrastructure tool at all. The ecosystem is more fragmented than it was two years ago, and that fragmentation has made people reconsider their defaults.
Infrastructure from code (IFC) takes a different approach. Instead of maintaining a separate set of configuration files that describe your infrastructure, you declare infrastructure needs directly in your application code using your application's programming language.
When you write this in a TypeScript service:

```typescript
import { SQLDatabase } from "encore.dev/storage/sqldb";

const db = new SQLDatabase("users", {
  migrations: "./migrations",
});
```
The framework understands that this service needs a PostgreSQL database. Locally, it provisions one in Docker. When you deploy to AWS, it creates an RDS instance with appropriate networking, security groups, and credentials. When you deploy to GCP, it creates a Cloud SQL instance instead. There are no separate configuration files, no state file to manage, and no HCL to learn.
The same pattern applies to other infrastructure primitives: pub/sub topics, cron jobs, object storage, caches. Each one is declared in the code that uses it, and the tooling handles provisioning for whatever environment you're targeting.
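A minimal sketch of the mechanism, with invented names throughout (this is not Encore's implementation): each declaration registers itself in a manifest when the application code is loaded, and a per-environment provisioner maps the same declaration onto different backing resources.

```typescript
// Hypothetical IFC core: declarations register themselves in a manifest
// that the toolchain reads at deploy time. All names here are invented.
type Primitive = { kind: "database" | "topic" | "cron"; name: string };

const manifest: Primitive[] = [];

function declare(p: Primitive): Primitive {
  manifest.push(p); // loading the app code *is* reading its infra needs
  return p;
}

// Application code: declarations live next to the logic that uses them.
const usersDb = declare({ kind: "database", name: "users" });
const signups = declare({ kind: "topic", name: "signups" });

// One declaration, different backing resources per target environment.
const provisioners: Record<string, (p: Primitive) => string> = {
  local: (p) => `docker container for ${p.kind} "${p.name}"`,
  aws: (p) =>
    p.kind === "database" ? `RDS instance "${p.name}"` : `SNS/SQS topic "${p.name}"`,
  gcp: (p) =>
    p.kind === "database" ? `Cloud SQL instance "${p.name}"` : `Pub/Sub topic "${p.name}"`,
};

function plan(target: string): string[] {
  return manifest.map(provisioners[target]);
}

console.log(plan("aws"));
// → ['RDS instance "users"', 'SNS/SQS topic "signups"']
```

The point of the sketch is that there is no second artifact to keep in sync: the manifest is derived from the application code on every deploy, which is why no state file or separate pipeline is needed.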
This matters because it removes the synchronization problem entirely. The infrastructure definition and the application code are the same artifact. There's no second repository to update, no state file that can drift, no separate deployment pipeline for infrastructure changes. When you push code that declares a new database, the database gets provisioned as part of the same deployment.
Declaring IaC dead ignores the reality of most large organizations. There are situations where a separate infrastructure layer is the right tool.
Large existing estates. If you have 500 Terraform modules managing thousands of resources across multiple AWS accounts, migrating away from that isn't a weekend project. The investment in IaC is already made, the team knows the tooling, and the risk of changing approaches is higher than the friction of continuing. Incremental improvement through better modules, policy enforcement, and CI practices is often the pragmatic path.
Multi-tool environments. Not everything is an application backend. Networking infrastructure, DNS configuration, CDN rules, compliance-driven security group setups, and other concerns that aren't tied to a single application still benefit from explicit, reviewable configuration files. IaC tools are good at managing resources that exist outside any particular application's scope.
Compliance and audit requirements. Some organizations need explicit, pre-approved infrastructure definitions that go through a formal review process before deployment. The separation between infrastructure code and application code is a feature in this context, not a bug. It creates a clear boundary where security and compliance teams can review and approve changes independently.
Specialized or unusual resources. IFC tools support common infrastructure patterns (databases, queues, caches, storage), but they don't cover every possible cloud service. If your application depends on a niche AWS service or requires a specific configuration that the IFC tool doesn't expose, you'll still need a way to declare that infrastructure explicitly.
For a growing set of use cases, infrastructure from code removes friction that IaC creates without adding new constraints.
New projects and greenfield development. Starting a project today doesn't have to mean spending a week writing Terraform before your first endpoint works. With IFC, you write your application code, the infrastructure follows, and you can deploy to a real cloud environment in the time it takes to write the first few services. The deployment path is significantly shorter.
Small teams without DevOps specialists. A team of three backend developers shouldn't need to learn HCL and AWS networking to ship a product. IFC lets developers who know TypeScript or Go deploy to production-grade infrastructure without acquiring a separate skill set. The infrastructure expertise is embedded in the tooling instead of required from the team.
AI-assisted development workflows. When an AI agent generates a new service with a database and a pub/sub topic, that code is deployable as-is with IFC. No one needs to go write matching Terraform. The infrastructure declarations are already in the generated code, using the same language and type system the AI was already working with. This is where the gap between IaC and IFC becomes most visible: AI tools can generate complete, deployable backend services without a separate infrastructure step.
Rapid iteration and prototyping. Spinning up a new environment, testing a different architecture, or experimenting with additional services is low-cost when infrastructure provisioning is automatic. You don't accumulate Terraform modules that need to be maintained for experiments that might not survive the week.
Infrastructure as code isn't dead, but its scope is narrowing. The pattern of maintaining separate configuration files for infrastructure made sense when the alternative was manual provisioning and there was no better option. For many teams, that's still the situation they're in, and IaC continues to serve them well.
For teams starting new projects, working without dedicated DevOps, or building with AI tools that generate code faster than infrastructure can be provisioned manually, infrastructure from code is a simpler model. It solves the same underlying problems (reproducibility, version control, consistency) without the state management overhead and two-codebase friction that come with traditional IaC.
The question isn't whether IaC is dead. It's whether the tradeoffs that made IaC the default still apply to the project you're working on today.