When a request hits your API, it might call two other services, run a database query, publish a message to a queue, and make an outgoing HTTP request. If something goes wrong (a slow response, a failed query, a timeout), you need to know exactly where the problem is. That's what distributed tracing gives you: a complete picture of a request's journey through your system.
This guide walks through two ways to add distributed tracing to a TypeScript REST API. We'll start with the manual OpenTelemetry approach, then look at how modern frameworks can eliminate most of this instrumentation work entirely.
A trace follows a single request across every service, database call, and external request it touches. Each operation becomes a "span" with timing data, and spans nest to show the full call tree. You can follow a request across services, find which database query or service call is the bottleneck, debug errors in context, and see which services depend on each other.
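Under the hood the model is simple: a trace is just a tree of timed span records linked by parent IDs. A toy sketch (plain TypeScript, not a real tracing SDK) of that structure and the kind of question a trace viewer answers:

```typescript
// Toy span record: real tracing SDKs add attributes, events, status, etc.
interface SpanRecord {
  id: string;
  parentId?: string; // undefined marks the root span
  name: string;
  startMs: number;
  endMs: number;
}

// All direct children of a given span, i.e. one level of the call tree.
function children(spans: SpanRecord[], parentId: string): SpanRecord[] {
  return spans.filter((s) => s.parentId === parentId);
}

// The span with the longest duration: a first guess at the bottleneck.
function slowest(spans: SpanRecord[]): SpanRecord {
  return spans.reduce((a, b) =>
    b.endMs - b.startMs > a.endMs - a.startMs ? b : a
  );
}

const exampleTrace: SpanRecord[] = [
  { id: "1", name: "POST /orders", startMs: 0, endMs: 120 },
  { id: "2", parentId: "1", name: "check-inventory", startMs: 5, endMs: 30 },
  { id: "3", parentId: "1", name: "INSERT orders", startMs: 35, endMs: 110 },
];

// Which step under the request span took longest?
console.log(slowest(children(exampleTrace, "1")).name); // "INSERT orders"
```

A trace viewer is essentially this query with a timeline UI on top: nest spans by parent, sort by duration, highlight the outliers.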
Without tracing, debugging a distributed system means correlating timestamps across log files from different services. With tracing, you click a request and see everything.
OpenTelemetry is the standard for distributed tracing. It's vendor-neutral, well-supported, and works with most observability backends (Jaeger, Zipkin, Datadog, Grafana Tempo). The tradeoff is setup complexity.
Here's what a full OpenTelemetry setup looks like for a TypeScript API.
npm install @opentelemetry/api \
  @opentelemetry/sdk-node \
  @opentelemetry/sdk-trace-node \
  @opentelemetry/exporter-trace-otlp-http \
  @opentelemetry/resources \
  @opentelemetry/semantic-conventions \
  @opentelemetry/instrumentation-http \
  @opentelemetry/instrumentation-express \
  @opentelemetry/instrumentation-pg
That's nine packages before you write a line of application code.
Create a tracing setup file that initializes before your application starts:
// tracing.ts
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { Resource } from "@opentelemetry/resources";
import {
  ATTR_SERVICE_NAME,
  ATTR_SERVICE_VERSION,
} from "@opentelemetry/semantic-conventions";
import { HttpInstrumentation } from "@opentelemetry/instrumentation-http";
import { ExpressInstrumentation } from "@opentelemetry/instrumentation-express";
import { PgInstrumentation } from "@opentelemetry/instrumentation-pg";

const sdk = new NodeSDK({
  resource: new Resource({
    [ATTR_SERVICE_NAME]: "my-api",
    [ATTR_SERVICE_VERSION]: "1.0.0",
  }),
  traceExporter: new OTLPTraceExporter({
    url: "http://localhost:4318/v1/traces",
  }),
  instrumentations: [
    new HttpInstrumentation(),
    new ExpressInstrumentation(),
    new PgInstrumentation(),
  ],
});

sdk.start();

process.on("SIGTERM", () => {
  sdk.shutdown().then(() => process.exit(0));
});
This file must load before any other module, so point Node's --require flag at the compiled output in package.json:
{
  "scripts": {
    "start": "node --require ./dist/tracing.js dist/index.js"
  }
}
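The load order matters because instrumentation packages work by monkey-patching library functions when the library module is loaded; any code that grabs a direct reference first escapes the patch. A toy sketch of the pitfall (no OpenTelemetry involved, just the patching mechanics):

```typescript
// A stand-in for a library function (imagine pg's query()).
const lib = {
  query(sql: string): string {
    return `rows for: ${sql}`;
  },
};

// Code that captured a direct reference BEFORE instrumentation ran
// will bypass the patch entirely.
const earlyRef = lib.query;

// "Instrumentation": wrap the method to record calls, the way an
// instrumentation package wraps a client library at load time.
const recorded: string[] = [];
const original = lib.query;
lib.query = (sql: string) => {
  recorded.push(sql);
  return original(sql);
};

lib.query("SELECT 1"); // goes through the patch: recorded
earlyRef("SELECT 2");  // bypasses the patch: invisible to tracing

console.log(recorded); // ["SELECT 1"]
```

Loading tracing.ts first guarantees every library is patched before any application code can import it.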
Auto-instrumentation covers HTTP and database calls, but your business logic needs manual spans:
import { context, trace, SpanStatusCode } from "@opentelemetry/api";
import express from "express";

const app = express();
const tracer = trace.getTracer("my-api");

app.post("/orders", async (req, res) => {
  const span = tracer.startSpan("create-order");
  // Child spans must be started in this span's context; otherwise they
  // attach to the auto-instrumented HTTP span instead of nesting here.
  const ctx = trace.setSpan(context.active(), span);
  try {
    // Validate input
    const validationSpan = tracer.startSpan(
      "validate-order",
      { attributes: { "order.items": req.body.items?.length } },
      ctx
    );
    const order = validateOrder(req.body);
    validationSpan.end();

    // Check inventory
    const inventorySpan = tracer.startSpan("check-inventory", undefined, ctx);
    await checkInventory(order.items);
    inventorySpan.end();

    // Save to database
    const dbSpan = tracer.startSpan("save-order", undefined, ctx);
    const saved = await saveOrder(order);
    dbSpan.setAttribute("order.id", saved.id);
    dbSpan.end();

    // Publish event
    const publishSpan = tracer.startSpan("publish-order-created", undefined, ctx);
    await publishOrderCreated(saved);
    publishSpan.end();

    span.setStatus({ code: SpanStatusCode.OK });
    res.json(saved);
  } catch (error) {
    span.setStatus({
      code: SpanStatusCode.ERROR,
      message: error instanceof Error ? error.message : "Unknown error",
    });
    span.recordException(error as Error);
    res.status(500).json({ error: "Failed to create order" });
  } finally {
    span.end();
  }
});
Every operation gets wrapped in span creation and cleanup. The error handling has to set span status, record the exception, and still end the span. Miss any of these and you get incomplete or leaked traces.
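Much of that boilerplate can be factored into a helper. A sketch of the pattern, written against a minimal stub of the tracer interface so it stands alone (with the real SDK, `tracer.startActiveSpan` from `@opentelemetry/api` supports the same shape):

```typescript
// Minimal stand-ins for the parts of a tracing API this helper needs.
interface Span {
  setStatus(status: { code: "OK" | "ERROR"; message?: string }): void;
  recordException(err: Error): void;
  end(): void;
}
interface Tracer {
  startSpan(name: string): Span;
}

// Wrap an async operation: set status, record errors, always end the span.
async function withSpan<T>(
  tracer: Tracer,
  name: string,
  fn: (span: Span) => Promise<T>
): Promise<T> {
  const span = tracer.startSpan(name);
  try {
    const result = await fn(span);
    span.setStatus({ code: "OK" });
    return result;
  } catch (err) {
    span.setStatus({
      code: "ERROR",
      message: err instanceof Error ? err.message : "Unknown error",
    });
    if (err instanceof Error) span.recordException(err);
    throw err; // let the caller's error handling run as usual
  } finally {
    span.end(); // runs on success and failure alike: no leaked spans
  }
}
```

The handler body then shrinks to a few `withSpan(tracer, "validate-order", ...)` calls, and the end/status discipline lives in one place instead of being repeated around every operation.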
When one service calls another, you need to propagate the trace context so spans connect into a single trace:
import { context, propagation } from "@opentelemetry/api";

async function callInventoryService(items: Item[]): Promise<void> {
  const headers: Record<string, string> = {};
  // Inject trace context into outgoing request headers
  propagation.inject(context.active(), headers);

  await fetch("http://inventory-service/check", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      ...headers, // Includes traceparent, tracestate headers
    },
    body: JSON.stringify({ items }),
  });
}
The receiving service needs to extract that context and use it as the parent for its own spans. Every service-to-service call requires this propagation.
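What actually crosses the wire is small: W3C Trace Context defines a `traceparent` header of the form `version-traceid-spanid-flags`. In practice you let `propagation.extract` handle it, but a hand-rolled parser sketch shows what the receiving side is working with:

```typescript
interface TraceParent {
  version: string;
  traceId: string;  // 32 hex chars identifying the whole trace
  spanId: string;   // 16 hex chars: the caller's span, our parent
  sampled: boolean; // lowest bit of the flags byte
}

function parseTraceparent(header: string): TraceParent | null {
  const m = /^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/.exec(
    header
  );
  if (!m) return null;
  const [, version, traceId, spanId, flags] = m;
  // All-zero trace or span IDs are invalid per the spec.
  if (/^0+$/.test(traceId) || /^0+$/.test(spanId)) return null;
  return { version, traceId, spanId, sampled: (parseInt(flags, 16) & 1) === 1 };
}

const parsed = parseTraceparent(
  "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01"
);
console.log(parsed?.traceId); // "0af7651916cd43dd8448eb211c80319c"
```

The `traceId` ties the new spans into the caller's trace, and the caller's `spanId` becomes the parent of the first span the receiving service creates.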
You also need something to receive the traces. Locally, that means running Jaeger or a collector:
docker run -d --name jaeger \
  -p 16686:16686 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest
Then open http://localhost:16686 to view traces.
OpenTelemetry is a solid standard and works with any backend. But the setup involves:

- installing and version-matching nine packages
- a tracing bootstrap file that must load before everything else
- manual spans around business logic, with careful status and cleanup handling
- manual context propagation on every service-to-service call
- running a collector or backend like Jaeger to receive the traces

For a single service this is manageable. For a system with ten services, it's a lot of boilerplate to maintain, and any gap in instrumentation means missing data.
Encore.ts takes a different approach. Instead of instrumenting your code manually, the Encore runtime traces everything automatically. The same order creation endpoint looks like this:
// orders/orders.ts
import { api, APIError } from "encore.dev/api";
import { orders } from "./db";

interface Order {
  id: string;
  items: OrderItem[];
  total: number;
  createdAt: Date;
}

interface CreateOrderRequest {
  items: OrderItem[];
}

export const create = api(
  { expose: true, method: "POST", path: "/orders" },
  async (req: CreateOrderRequest): Promise<Order> => {
    const validated = validateOrder(req);
    await checkInventory(validated.items);

    const order = await orders.queryRow<Order>`
      INSERT INTO orders (items, total)
      VALUES (${JSON.stringify(validated.items)}, ${validated.total})
      RETURNING id, items, total, created_at as "createdAt"
    `;

    await orderCreated.publish({ orderId: order!.id });
    return order!;
  }
);
Everything is traced automatically. The trace appears in the local dashboard the moment you call this endpoint, with full timing data and request/response payloads.
When this endpoint calls another service, the trace context propagates automatically:
// orders/orders.ts
import { inventory } from "~encore/clients";

async function checkInventory(items: OrderItem[]): Promise<void> {
  const result = await inventory.check({ items });
  if (!result.available) {
    throw APIError.failedPrecondition("items not in stock");
  }
}
The call to inventory.check() is a service-to-service call. Encore tracks it as a child span in the same trace, with full request/response data, without any instrumentation on your part.
Encore's runtime instruments these operations out of the box:

- incoming API requests and responses
- service-to-service calls
- database queries
- Pub/Sub publishes and message deliveries

Each operation captures stack traces so you can click any span and see the exact line of code that triggered it.
When you run encore run, a local development dashboard starts at localhost:9400. Every request to your API generates a trace you can inspect immediately.
Click any request to see a timeline of all operations: nested spans showing the call tree, request/response data for each API call, SQL queries with parameters and execution time, pub/sub messages, and errors with stack traces. For multi-service applications, the trace shows the complete request path across all services with timing for each step.
The trace viewer is built in, so there's nothing extra to run or install.
When you deploy with Encore Cloud, traces are collected and stored automatically and can be searched and inspected in the Trace Explorer.
Traces from Preview Environments are included, so you can debug issues in a PR before it reaches production.
Install Encore and see traces in under five minutes.
# Install CLI (macOS)
brew install encoredev/tap/encore
# Or Linux / Windows (WSL)
curl -L https://encore.dev/install.sh | bash
# Create a project
encore app create tracing-demo --example=ts/hello-world
cd tracing-demo
# Run it
encore run
Make a request:
curl http://localhost:4000/hello/World
Open http://localhost:9400 and click the request. You'll see the trace with timing data, request/response payloads, and any database queries or service calls. Everything works out of the box.
To see cross-service tracing, try the multi-service starter or add a second service and call it using Encore's generated clients. The trace will show the full call chain automatically.
| | OpenTelemetry | Encore |
|---|---|---|
| Setup | SDK config, exporters, collector | None |
| Instrumentation | Manual spans + auto-instrumentation packages | Automatic for all operations |
| Context propagation | Manual injection/extraction | Automatic across service calls |
| Local trace viewer | Run Jaeger/Zipkin in Docker | Built-in at localhost:9400 |
| New service added | Configure tracing for each service | Traces work immediately |
| Maintenance | Update packages, fix instrumentation gaps | None |
OpenTelemetry gives you flexibility and vendor choice. Encore gives you complete traces from the first line of code. If you're building with Encore, you get distributed tracing as a built-in capability rather than an integration project.
Have questions? Join our Discord community where developers help each other daily.