
Most teams that adopt Kubernetes have a rough sense of what the compute will cost. Fewer have a clear picture of the total bill once everything is running. The gap between "a few hundred a month for nodes" and the actual monthly spend tends to surprise people, because the big-ticket items aren't the ones you see on the pricing page.
This is a line-by-line breakdown of what a small production Kubernetes cluster costs, from the obvious charges to the ones that only show up after you've committed. It also compares Kubernetes with the alternatives: managed containers, PaaS, and infrastructure-from-code platforms such as Encore Cloud.
Every managed Kubernetes service charges for the control plane: the API server, etcd, the scheduler, and the controller manager that keep the cluster running.
EKS on AWS costs $73 per month per cluster. GKE offers a free tier covering one Autopilot or zonal cluster, then charges $73 per month for each standard cluster. AKS on Azure is free on the Free tier and $73 per month on the Standard tier, which includes the uptime SLA.
Most production setups run at least two clusters (one for staging, one for production), which puts the control plane cost at $146 per month before any workload runs.
Monthly cost: ~$150
Worker nodes are where your containers actually run. A minimal production setup with high availability across two or three availability zones requires at least three nodes. For a small application (a handful of services, a few hundred requests per second), nodes in the m5.large or e2-standard-4 range are common.
Three m5.large instances on AWS run about $210 per month with on-demand pricing. In practice, you'll need more. Kubernetes system components (CoreDNS, kube-proxy, monitoring agents) consume resources on every node, which means you're paying for capacity your application doesn't use. Teams typically end up with 4-6 nodes to cover system overhead plus enough headroom for deployments and pod scheduling.
With reserved instances or committed use discounts, a small cluster with 4-6 nodes runs $500-800 per month. On-demand pricing pushes that to $800-1,500 depending on the instance type and region. Larger workloads with higher CPU or memory requirements can reach $2,000 or more.
Monthly cost: $500-2,000
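The node math above is easy to sketch. The hourly rates below are assumptions based on typical us-east-1 pricing for m5.large-class instances; check your provider's current pricing before relying on them.

```python
# Back-of-envelope monthly node cost. Rates are assumed, not quoted
# from any pricing API.
HOURS_PER_MONTH = 730  # average hours in a month

def monthly_node_cost(node_count: int, hourly_rate: float) -> float:
    """Monthly cost of a fixed-size node pool at a given hourly rate."""
    return node_count * hourly_rate * HOURS_PER_MONTH

# Three m5.large nodes on demand (~$0.096/hr): the $210/mo figure above
print(f"${monthly_node_cost(3, 0.096):,.0f}")  # -> $210

# A more realistic 5-node pool, on demand vs. an assumed ~$0.060/hr
# 1-year commitment rate
print(f"${monthly_node_cost(5, 0.096):,.0f} on demand")
print(f"${monthly_node_cost(5, 0.060):,.0f} reserved")
```

The same function extends to larger instance types, which is where the $2,000 end of the range comes from.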
Networking costs on Kubernetes are notoriously hard to predict because they come from multiple sources.
NAT Gateways: If your pods need to reach the internet (to call external APIs, pull container images, send webhooks), traffic goes through a NAT Gateway. On AWS, that's $32 per month per gateway plus $0.045 per GB of processed data. Running NAT Gateways in two availability zones for high availability costs $64 per month before data transfer. A moderate traffic workload processing 500 GB per month adds another $22.50 in data charges.
Load Balancers: Each externally exposed service needs a load balancer, though an ingress controller can consolidate several services behind one. A single AWS Application Load Balancer runs about $22 per month plus data processing charges. Most clusters need at least one, and many end up with two or three as the number of services grows.
Cross-AZ data transfer: Kubernetes spreads pods across availability zones for resilience, which means pods frequently talk to other pods in different zones. AWS charges $0.01 per GB for cross-AZ traffic. This cost is invisible until you look at the bill. For a cluster with moderate inter-service communication, cross-AZ transfer alone can add $50-150 per month.
Monthly cost: $150-400
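Since networking charges come from several sources, it helps to add them up explicitly. This sketch uses the AWS rates quoted above (NAT Gateway $32/month plus $0.045/GB, ALB ~$22/month, cross-AZ $0.01/GB); LCU charges and regional variation are ignored.

```python
# Approximate monthly networking cost for a small cluster, using the
# AWS rates from the text. All rates are assumptions; verify them
# against current pricing.
NAT_MONTHLY, NAT_PER_GB = 32.0, 0.045
LB_MONTHLY = 22.0
CROSS_AZ_PER_GB = 0.01

def networking_monthly(nat_gateways: int, nat_gb: float,
                       load_balancers: int, cross_az_gb: float) -> float:
    """Sum of NAT, load balancer, and cross-AZ transfer charges."""
    return (nat_gateways * NAT_MONTHLY
            + nat_gb * NAT_PER_GB
            + load_balancers * LB_MONTHLY
            + cross_az_gb * CROSS_AZ_PER_GB)

# Two NAT gateways, 500 GB egress, one ALB, 5 TB of cross-AZ chatter
print(f"${networking_monthly(2, 500, 1, 5000):.2f}")  # -> $158.50
```

Doubling the cross-AZ traffic or adding a second load balancer pushes the total toward the top of the $150-400 range.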
A bare Kubernetes cluster doesn't include much of what you need to run production workloads. The ecosystem expects you to bring your own observability, certificate management, ingress, and deployment tooling.
A typical production stack includes:

- Metrics and dashboards: Prometheus and Grafana, self-hosted or through a vendor
- Log aggregation: Loki, an ELK stack, or a hosted logging service
- Certificate management: cert-manager for TLS certificates
- Ingress: an ingress controller such as ingress-nginx
- Deployments: Helm charts plus a CI/CD pipeline or a GitOps tool like Argo CD
Self-hosting the monitoring and logging stack saves on vendor fees but costs in cluster resources and maintenance time. Using managed services simplifies operations but adds to the bill. Either way, expect to spend $200-500 per month on tooling for a small cluster, whether that cost shows up as vendor invoices or as additional compute to run the self-hosted stack.
Monthly cost: $200-500
Everything above adds up to roughly $1,000-3,000 per month in infrastructure costs for a small production cluster. That's meaningful, but it's not the number that matters most.
The bigger cost is the person keeping it running. Kubernetes clusters need ongoing attention: version upgrades (Kubernetes releases three minor versions per year), node patching, debugging pod scheduling issues, tuning resource requests and limits, updating Helm charts, investigating why a deployment rolled back, figuring out why cross-service DNS resolution broke after a CoreDNS update. For a small team, these tasks typically consume 20-40% of one engineer's time.
A platform engineer in the US costs $150,000-250,000 per year in total compensation. At 20-40% of their time, that's $30,000-100,000 per year dedicated to Kubernetes operations, or roughly $2,500-8,000 per month.
This is the cost that doesn't appear on any cloud bill but dominates the total.
Monthly cost: $2,500-8,000
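The engineer-time figure is just compensation amortized monthly and scaled by the fraction of time spent on cluster operations:

```python
# Monthly cost of Kubernetes operations as a slice of an engineer's
# total compensation. Inputs are the ranges from the text.
def ops_cost_monthly(total_comp_per_year: float, time_fraction: float) -> float:
    """Dollar cost per month of time_fraction of one engineer."""
    return total_comp_per_year * time_fraction / 12

print(f"${ops_cost_monthly(150_000, 0.20):,.0f}")  # -> $2,500
print(f"${ops_cost_monthly(250_000, 0.40):,.0f}")  # upper bound, ~$8,333
```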
| Category | Monthly cost |
|---|---|
| Control plane (2 clusters) | $150 |
| Compute (4-6 nodes) | $500-2,000 |
| Networking | $150-400 |
| Tooling and add-ons | $200-500 |
| Engineer time (20-40%) | $2,500-8,000 |
| Total | $3,500-11,050 |
For a minimal Kubernetes setup running a small application, the realistic monthly cost is somewhere between $3,500 and $11,000, depending on how much you self-host, what region you deploy in, and how senior your platform engineer is. The midpoint, around $6,000-7,000 per month, is a reasonable estimate for most small teams.
That's roughly $72,000-84,000 per year to run infrastructure for an application that might only need a database, a few services, and a message queue.
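The summary table can be kept as data so the total recomputes when any line item changes. The (low, high) pairs below are the monthly dollar ranges from the table above.

```python
# Line items from the summary table, as (low, high) monthly dollars.
COSTS = {
    "control plane (2 clusters)": (150, 150),
    "compute (4-6 nodes)":        (500, 2000),
    "networking":                 (150, 400),
    "tooling and add-ons":        (200, 500),
    "engineer time (20-40%)":     (2500, 8000),
}

low = sum(lo for lo, _ in COSTS.values())
high = sum(hi for _, hi in COSTS.values())
print(f"${low:,}-${high:,} per month")  # -> $3,500-$11,050
```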
| Approach | Typical monthly cost | What you manage | Best for |
|---|---|---|---|
| Kubernetes (self-managed on EKS/GKE) | $3,500-11,000 | Everything: cluster, networking, monitoring, upgrades, deployments | Large-scale microservices, teams with platform engineering capacity |
| Managed containers (ECS Fargate, Cloud Run) | $1,500-4,000 | Task definitions, networking, monitoring | Teams with cloud experience who want to skip cluster management |
| PaaS (Render, Railway, Fly.io) | $500-3,000 | Application code, some configuration | Small teams that want managed hosting without AWS expertise |
| Infrastructure-from-code (Encore Cloud) | $49/member/mo + cloud costs | Application code | Teams that want production infrastructure in their own cloud account without managing it |
Managed container services like ECS Fargate or Cloud Run eliminate the cluster management layer entirely. You still configure networking, monitoring, and deployments, but the control plane, node management, and most of the Kubernetes-specific tooling disappear. For small-to-medium workloads, that cuts 40-60% off the total cost.
PaaS platforms abstract away more of the infrastructure, though the pricing models vary widely and some deploy into their own accounts rather than yours.
Infrastructure-from-code takes a different approach. With Encore Cloud, your application code declares what infrastructure it needs (databases, queues, cron jobs, storage), and the platform provisions standard cloud resources in your own AWS or GCP account. There's no cluster to manage, no YAML to maintain, no monitoring stack to configure. The platform cost is $49 per team member per month, and you pay standard cloud pricing for the underlying resources. For a small team of three, that's $147 per month in platform fees plus whatever the provisioned AWS or GCP resources cost, which for a comparable workload is typically $200-500 per month in compute and managed services.
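The platform-fee math for the Encore Cloud option works out as follows; the $200-500 cloud-spend range is the estimate from the text, not a quoted price.

```python
# Monthly total for the infrastructure-from-code option: a flat
# per-member platform fee plus pass-through cloud resource costs.
def encore_monthly(team_members: int, cloud_spend: float) -> float:
    """Platform fee ($49/member/mo, per the text) plus cloud spend."""
    PLATFORM_FEE = 49
    return team_members * PLATFORM_FEE + cloud_spend

# Three-person team, low and high ends of the estimated cloud spend
print(f"${encore_monthly(3, 200):,.0f}-${encore_monthly(3, 500):,.0f}")  # -> $347-$647
```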
Kubernetes is worth its cost when you have the scale to justify it. If you're running 20+ services with complex networking requirements, deploying across multiple regions, or operating in a regulated industry where you need fine-grained control over every aspect of the infrastructure, the platform engineering investment pays for itself.
For most applications with fewer than ten services, the alternatives deliver the same reliability at a fraction of the cost and operational burden. The question isn't whether Kubernetes is capable. It's whether the problem you're solving is large enough to warrant what you'll spend to run it.