
Container orchestration covers a wide range of capabilities: scheduling, scaling, self-healing, service discovery, rolling deployments, config management, RBAC, network policies. Kubernetes bundles all of them into a single system. Most teams need a fraction of that list, and the operational cost scales with the full feature set regardless of how much of it you use.
Several tools provide the scheduling, scaling, and health-checking parts of orchestration without the rest of the Kubernetes ecosystem: Nomad, Docker Swarm, ECS, Cloud Run, and infrastructure-from-code platforms like Encore. This guide compares them.
Before comparing alternatives, it helps to be specific about what K8s orchestration actually provides: bin-packing scheduling, metric-driven autoscaling, self-healing restarts, DNS-based service discovery, rolling deployments, config and secrets management, RBAC, and network policies.
Every alternative trades off some subset of these features for reduced operational overhead. The question is which trade-offs match your workload.
Nomad is a workload orchestrator that handles containers, VMs, Java apps, and standalone binaries. Where Kubernetes builds an entire platform around container orchestration, Nomad focuses on the scheduling and deployment layer and lets you compose everything else.
A Nomad cluster is a single binary on each node. No etcd, no separate API server, no controller manager. The job specification format is HCL (same as Terraform), and a basic deployment definition is 30-40 lines instead of the 100+ lines typical for a K8s Deployment + Service + Ingress.
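For a sense of what that HCL looks like, here is a minimal sketch of a Nomad service job (the job name, image, port, and health-check path are illustrative placeholders, not from a real deployment):

```hcl
job "web" {
  datacenters = ["dc1"]
  type        = "service"

  group "app" {
    count = 3

    update {
      max_parallel     = 1
      min_healthy_time = "30s"
      auto_revert      = true
    }

    network {
      port "http" {
        to = 8080
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "registry.example.com/web:1.4.2"
        ports = ["http"]
      }

      # Registers the service in Consul with an HTTP health check
      service {
        name = "web"
        port = "http"
        check {
          type     = "http"
          path     = "/healthz"
          interval = "10s"
          timeout  = "2s"
        }
      }

      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}
```

This one file covers scheduling, rolling updates, health checking, and service registration — the pieces that would span a Deployment, Service, and readiness-probe config in K8s.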
What it provides: scheduling, rolling updates, health checks, service discovery (via Consul integration), multi-datacenter federation, support for non-container workloads. Nomad handles bin-packing and placement constraints, and its multi-region support is built in rather than bolted on.
What's missing vs K8s: no built-in network policies, no equivalent to the CRD/operator ecosystem, fewer third-party integrations. Secrets management requires Vault. Service mesh requires Consul Connect. Each is a separate tool to deploy and maintain, though they're designed to work together.
Who it fits: teams that need real orchestration across heterogeneous workloads and have the ops capacity to manage servers, but find K8s operator overhead disproportionate to their scale. Nomad's learning curve is measured in days rather than weeks.
Docker Swarm is Docker's built-in orchestration mode. Run `docker swarm init`, join worker nodes, and deploy stacks using files that look almost identical to `docker-compose.yml`. If you already know Docker Compose, you can have a multi-node cluster running in an afternoon.
What it provides: service scheduling, rolling updates, basic scaling, encrypted overlay networking between nodes, built-in service discovery via DNS. The deployment model is straightforward: define a service, set the replica count, and Swarm distributes containers across available nodes.
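A Swarm stack file is essentially a Compose file with a `deploy` section for replicas and update behavior. A minimal sketch (service and image names are placeholders):

```yaml
# stack.yml
version: "3.8"
services:
  web:
    image: registry.example.com/web:1.4.2
    ports:
      - "80:8080"
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
```

After `docker swarm init`, this deploys with `docker stack deploy -c stack.yml web`, and Swarm spreads the three replicas across available nodes.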
What's missing vs K8s: advanced scheduling constraints, horizontal pod autoscaling based on metrics, custom resource definitions, the entire operator ecosystem. There's no equivalent to Helm charts, no built-in support for canary deployments, and monitoring requires external tooling.
Who it fits: small internal deployments where the team already uses Docker Compose and wants clustering without learning a new tool. The critical caveat is that Swarm has been in maintenance mode for years. Docker's investment is elsewhere, the community has moved on, and new integrations aren't being built. It works today, but choosing it for a new project means building on a foundation with no roadmap.
ECS with Fargate handles container orchestration without exposing you to cluster management. You define task definitions (container images, CPU, memory), create services (desired count, load balancer, scaling rules), and Fargate runs them. No nodes to provision, patch, or scale.
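As a rough illustration, a Fargate task definition is a JSON document like the following (the account ID, role ARN, and image path are placeholders):

```json
{
  "family": "web",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:1.4.2",
      "portMappings": [
        { "containerPort": 8080, "protocol": "tcp" }
      ],
      "healthCheck": {
        "command": ["CMD-SHELL", "curl -f http://localhost:8080/healthz || exit 1"],
        "interval": 30,
        "retries": 3
      }
    }
  ]
}
```

The task definition only describes the container; the service definition, load balancer target group, and scaling policies are configured separately on top of it.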
What it provides: container scheduling, rolling deployments, auto-scaling (target tracking and step policies), service discovery via Cloud Map, integration with ALB for load balancing, IAM-based security model, and deep integration with other AWS services.
What's missing vs K8s: no CRD/operator model, no built-in service mesh (though App Mesh exists as a separate service), no multi-cloud portability. The configuration surface is still significant. A production ECS service involves task definitions, service definitions, target groups, security groups, IAM roles, and VPC configuration. You're trading Kubernetes YAML for CloudFormation or Terraform templates that can be equally verbose.
Who it fits: teams already on AWS who want managed container orchestration and are comfortable with AWS-native tooling. ECS removes the cluster management burden while keeping fine-grained control over networking, scaling, and IAM.
Cloud Run takes a different approach. Instead of giving you an orchestrator with fewer features than K8s, it removes the orchestration layer from your concerns entirely. Push a container image, and Cloud Run handles scheduling, scaling (including to zero), load balancing, TLS, and health checks. There's nothing to configure beyond the container itself and a few scaling parameters.
What it provides: automatic scaling from zero to thousands of instances, per-request billing, built-in HTTPS with managed certificates, revision-based deployments with traffic splitting, and a deployment experience that's genuinely one command: `gcloud run deploy`.
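A typical deploy with the common scaling flags spelled out might look like this (service name, image path, and region are placeholders):

```
gcloud run deploy web \
  --image us-docker.pkg.dev/my-project/app/web:1.4.2 \
  --region us-central1 \
  --allow-unauthenticated \
  --min-instances 0 \
  --max-instances 100 \
  --concurrency 80
```

Everything else — provisioning, TLS certificates, load balancing, rollout of the new revision — happens behind that one command.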
What's missing vs K8s: persistent local storage, long-running background processes (beyond the request timeout), fine-grained scheduling control, any concept of node affinity or placement constraints. The model is strictly request-driven. If your workload doesn't fit the pattern of "receive request, process, respond," Cloud Run will push back.
Who it fits: teams building stateless HTTP services on GCP. For APIs, webhooks, and event-driven workloads, Cloud Run provides production-grade deployment with minimal configuration. The scale-to-zero model also makes it cost-effective for services with variable or low traffic.
| Feature | K8s | Nomad | Docker Swarm | ECS Fargate | Cloud Run |
|---|---|---|---|---|---|
| Container scheduling | Yes | Yes | Yes | Yes (managed) | Yes (managed) |
| Non-container workloads | Limited | Yes | No | No | No |
| Auto-scaling | HPA + custom metrics | External/manual | Manual | Target tracking | Automatic |
| Rolling updates | Yes | Yes | Yes | Yes | Revision-based |
| Service discovery | DNS-based | Via Consul | DNS-based | Via Cloud Map | Automatic |
| Scale to zero | With KEDA | No | No | No | Yes |
| Self-healing | Yes | Yes | Yes | Yes | Yes |
| Network policies | Yes | No (use Consul) | No | Security groups | Managed |
| CRD/Operator ecosystem | Yes | No | No | No | No |
| Cluster management | You manage | You manage | You manage | AWS manages | Google manages |
| Learning curve | Weeks | Days | Hours | Days | Hours |
There's another option that sidesteps container orchestration entirely. Instead of choosing which orchestrator to use, you let the infrastructure derive from what your code declares.
Encore is an open-source backend framework for TypeScript and Go. You define services, APIs, databases, pub/sub topics, and cron jobs as type-safe objects in your application code. When you deploy through Encore Cloud, it provisions the appropriate cloud resources in your own AWS or GCP account. The framework determines whether your workload runs on ECS, Cloud Run, or other managed services based on what your code needs.
```typescript
import { api } from "encore.dev/api";
import { SQLDatabase } from "encore.dev/storage/sqldb";
import { Topic } from "encore.dev/pubsub";

interface CreateOrderRequest { item: string; qty: number }
interface Order { id: number; item: string; qty: number }
interface OrderEvent { orderID: number }

// Declaring the database here tells Encore to provision it
const db = new SQLDatabase("orders", { migrations: "./migrations" });

export const orderCreated = new Topic<OrderEvent>("order-created", {
  deliveryGuarantee: "at-least-once",
});

export const createOrder = api(
  { method: "POST", path: "/orders", expose: true },
  async (req: CreateOrderRequest): Promise<Order> => {
    const order = await db.queryRow<Order>`
      INSERT INTO orders (item, qty) VALUES (${req.item}, ${req.qty})
      RETURNING *`;
    if (!order) throw new Error("order insert returned no row");
    await orderCreated.publish({ orderID: order.id });
    return order;
  },
);
```
That code produces a service with an API, a PostgreSQL database, and a pub/sub topic. Deployed to AWS, it becomes ECS Fargate, RDS, SQS, and the networking to connect them. There's no orchestrator to choose, no cluster to manage, and no task definitions or scaling policies to write. The trade-off is that you use the Encore framework, which means adoption or migration work for existing codebases.
Who it fits: teams building new backend projects in TypeScript or Go who want production infrastructure in their own cloud account without managing orchestration, Terraform, or Dockerfiles.
If you need multi-cloud portability, CRD-based extensibility, or you're running 50+ services across multiple teams, Kubernetes orchestration earns its complexity. If you need orchestration for mixed workloads (containers alongside VMs and bare processes), Nomad is the simpler option. For teams already on AWS who want managed container orchestration with full control, ECS Fargate removes the cluster burden. Cloud Run does the same on GCP with an even smaller configuration surface, as long as your workloads are stateless. And if the orchestration layer itself feels like the wrong abstraction for your team, infrastructure-from-code with Encore removes the question entirely by deriving infrastructure from application code.
The goal isn't to avoid Kubernetes on principle. It's to match the orchestration tool to the problem you have today, not the one you might have in two years.