
Getting a Postgres database running on AWS should be straightforward. You need a database, your app needs a connection string, and queries need to work. The actual experience ranges from 15 minutes of console clicking to weeks of Terraform debugging, depending on the approach you pick.
This guide compares three approaches: manual RDS setup, Terraform configuration, and infrastructure from code with Encore, where a database is two lines of TypeScript.
The AWS Console lets you point-and-click your way to a running database. The steps go roughly like this:

1. Open the RDS console and choose Create database.
2. Select the PostgreSQL engine and a version.
3. Pick an instance class and storage size.
4. Set the master username and password.
5. Choose a VPC, subnet group, and security group.
6. Configure backups and encryption, then click Create database.
After 10-15 minutes of provisioning, you have a database. Then you copy the endpoint, construct a connection string, and hardcode or inject it into your application.
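Constructing that connection string is itself a small source of bugs, because passwords often contain characters that must be URL-encoded. A minimal sketch in TypeScript; the endpoint and credentials below are placeholders for illustration, not real defaults:

```ts
// Build a Postgres connection string from the values RDS gives you.
// Host, user, and password here are hypothetical placeholders.
function buildConnectionString(opts: {
  host: string;
  port: number;
  user: string;
  password: string;
  database: string;
}): string {
  // Special characters in the password (@, /, etc.) must be escaped.
  const password = encodeURIComponent(opts.password);
  return `postgresql://${opts.user}:${password}@${opts.host}:${opts.port}/${opts.database}`;
}

const url = buildConnectionString({
  host: "myapp.abc123.us-east-1.rds.amazonaws.com",
  port: 5432,
  user: "dbadmin",
  password: "p@ss/word",
  database: "myapp",
});
console.log(url);
```

Forgetting the encoding step is a classic failure mode: the string works until someone rotates the password to one containing an `@`.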
This works for prototypes. The problem appears the moment you need a second environment. Repeating those steps for staging and production means clicking through the same screens again, hoping you match every setting. No record of what you configured or why. No way to review changes before they happen. When someone modifies a security group rule six months from now, nobody will know it was intentional.
Terraform solves the repeatability problem. You describe the database in HCL, commit it to version control, and `terraform apply` creates matching infrastructure. Here's what a basic RDS Postgres instance actually requires:
```hcl
# VPC and networking (required for RDS)
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
}

resource "aws_subnet" "private_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_subnet" "private_b" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b"
}

resource "aws_db_subnet_group" "main" {
  name       = "myapp-db"
  subnet_ids = [aws_subnet.private_a.id, aws_subnet.private_b.id]
}

# Security group: only the app tier can reach Postgres.
# Assumes an aws_security_group.app for the application, defined elsewhere.
resource "aws_security_group" "db" {
  name   = "myapp-db-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.app.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Parameter group
resource "aws_db_parameter_group" "postgres" {
  family = "postgres15"
  name   = "myapp-pg-params"

  parameter {
    name  = "log_connections"
    value = "1"
  }

  parameter {
    name  = "log_disconnections"
    value = "1"
  }
}

# The master password, passed in at apply time rather than committed
variable "db_password" {
  type      = string
  sensitive = true
}

# The actual database instance
resource "aws_db_instance" "main" {
  identifier     = "myapp"
  engine         = "postgres"
  engine_version = "15"
  instance_class = "db.t3.micro"

  allocated_storage     = 20
  max_allocated_storage = 100
  storage_type          = "gp3"
  storage_encrypted     = true

  db_name  = "myapp"
  username = "dbadmin"
  password = var.db_password

  db_subnet_group_name   = aws_db_subnet_group.main.name
  vpc_security_group_ids = [aws_security_group.db.id]
  parameter_group_name   = aws_db_parameter_group.postgres.name

  backup_retention_period = 7
  skip_final_snapshot     = false
  multi_az                = false
  publicly_accessible     = false

  tags = {
    Environment = "production"
    ManagedBy   = "terraform"
  }
}

output "db_endpoint" {
  value = aws_db_instance.main.endpoint
}
```
That's around 100 lines, and it's a minimal setup. A production configuration would add NAT gateways, route tables, KMS keys for encryption, IAM roles for enhanced monitoring, CloudWatch alarms, and read replicas. You'd also need a backend configuration for state storage: typically an S3 bucket and DynamoDB table, which you create with yet more Terraform.
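The backend configuration mentioned above is its own block of HCL. A minimal sketch, assuming a hypothetical bucket and lock table that already exist (the names here are illustrative):

```hcl
terraform {
  backend "s3" {
    bucket         = "myapp-terraform-state"  # hypothetical bucket name
    key            = "rds/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "myapp-terraform-locks"  # for state locking
    encrypt        = true
  }
}
```

Without the locking table, two concurrent `terraform apply` runs can corrupt the shared state file, which is why the DynamoDB table is standard practice rather than an optional extra.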
This HCL describes a database, but you're responsible for keeping it in sync with your application. If your app adds a new service that needs its own database, you write more Terraform. If you rename a service, you update the Terraform and hope the state migration goes smoothly. Schema migrations live in a completely separate system. And every developer on the team needs to understand enough HCL and AWS networking to review changes safely.
Terraform is the right tool when a dedicated platform team manages infrastructure as a separate concern. For teams where developers own the full stack, it adds a layer of complexity that scales with every new resource.
There's a third approach where the database is declared as part of your application code, and the infrastructure gets provisioned automatically when you deploy.
With Encore, a Postgres database looks like this:
```ts
import { SQLDatabase } from "encore.dev/storage/sqldb";

const db = new SQLDatabase("mydb", {
  migrations: "./migrations",
});
```
Your migration files go in the `./migrations` directory:
```sql
-- migrations/001_create_tables.up.sql
CREATE TABLE users (
  id         UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  email      TEXT NOT NULL UNIQUE,
  created_at TIMESTAMP DEFAULT NOW()
);
```
That's the entire database definition. When you deploy, Encore's build system analyzes your code, sees the `SQLDatabase` declaration, and provisions a fully configured RDS Postgres instance in your AWS account. It handles the VPC, subnets, security groups, parameter groups, credentials, and connection pooling: everything the 100 lines of Terraform above spell out by hand happens behind the scenes.
Your application code uses the database directly:
```ts
import { api } from "encore.dev/api";
import { db } from "./db";

export const getUser = api(
  { expose: true, method: "GET", path: "/users/:id" },
  async ({ id }: { id: string }) => {
    return db.queryRow`SELECT * FROM users WHERE id = ${id}`;
  }
);
```
No connection strings to manage. No environment variables to configure. The framework handles credential injection at runtime.
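The `queryRow` call above is a tagged template: the interpolated `id` is passed to the database as a bound parameter, not spliced into the SQL string, so it can't cause SQL injection. A minimal sketch of the mechanism (this illustrates how tagged templates separate query text from values; it is not Encore's actual implementation):

```ts
// A tagged template that turns interpolated values into positional
// parameters ($1, $2, ...) instead of concatenating them into the
// SQL string. Illustrative only.
function sql(strings: TemplateStringsArray, ...values: unknown[]) {
  const text = strings.reduce(
    (query, part, i) => query + (i > 0 ? `$${i}` : "") + part,
    ""
  );
  return { text, values };
}

const id = "42; DROP TABLE users";
const query = sql`SELECT * FROM users WHERE id = ${id}`;
console.log(query.text);   // the parameterized SQL text
console.log(query.values); // the values, sent separately to the driver
```

Even with a malicious `id`, the query text stays fixed and the value travels out-of-band, which is why the driver can never mistake it for SQL.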
The real difference is in the ongoing maintenance, not just the initial setup.
Manual RDS puts everything on you: provisioning, security, backups, monitoring, credential rotation, environment parity, and remembering what you configured six months ago.
Terraform codifies the provisioning but leaves you managing state files, planning and reviewing infrastructure changes separately from application changes, running migrations through a different tool, and wiring credentials into your deployment pipeline. When drift happens (and it does; someone always edits something in the console), you're left reconciling state. For a deeper look at that problem, see Terraform Drift: Why It Happens and How to Fix It.
Infrastructure from code ties the database lifecycle to your application code. Add a SQLDatabase declaration, push, and the database exists. Remove it, and the next deploy cleans it up. Migrations run as part of deployment. Credentials are injected automatically. Every environment gets an isolated database with the same configuration. There's no separate infrastructure repository to keep in sync.
Encore's approach works when your infrastructure needs align with what the framework supports: Postgres databases, Pub/Sub topics, cron jobs, object storage, caching. For teams building backend services and APIs, that covers the common cases.
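Those other resources follow the same declare-in-code pattern as the database. A sketch of what Pub/Sub and cron declarations look like, based on Encore's TypeScript APIs; the names and the handler endpoint are illustrative, not from the original:

```ts
import { Topic } from "encore.dev/pubsub";
import { CronJob } from "encore.dev/cron";
import { dailyReport } from "./report"; // hypothetical API endpoint

// A Pub/Sub topic, provisioned like the database: declare it and deploy.
interface UserSignup {
  userID: string;
}

export const signups = new Topic<UserSignup>("user-signup", {
  deliveryGuarantee: "at-least-once",
});

// A cron job that calls an existing endpoint on a schedule.
const _ = new CronJob("daily-report", {
  title: "Send the daily report",
  every: "24h",
  endpoint: dailyReport,
});
```

The point is uniformity: every resource type is a declaration next to the code that uses it, so provisioning scales the same way for a topic as for a database.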
If you need a Neptune graph database, a custom VPC peering setup, or resources Encore doesn't model, Terraform gives you the flexibility to define anything AWS offers. You can also combine approaches: use Encore for the standard infrastructure and drop to Terraform for the edge cases, or run encore infra generate to export Terraform files when you need full control.
For the specific case of deploying Postgres to AWS, the question is whether you want to write and maintain 100 lines of HCL for something your application code already describes.
```bash
curl -L https://encore.dev/install.sh | bash
encore app create my-api
cd my-api
encore run
```
`encore run` gives you a local Postgres database with zero configuration. When you're ready to deploy to AWS, connect your account through Encore Cloud and `git push`.