02/20/26

How to Add Distributed Tracing to a TypeScript REST API

From zero visibility to full request tracing without instrumentation code

8 Min Read

When a request hits your API, it might call two other services, run a database query, publish a message to a queue, and make an outgoing HTTP request. If something goes wrong (a slow response, a failed query, a timeout), you need to know exactly where the problem is. That's what distributed tracing gives you: a complete picture of a request's journey through your system.

This guide walks through two ways to add distributed tracing to a TypeScript REST API. We'll start with the manual OpenTelemetry approach, then look at how modern frameworks can eliminate most of this instrumentation work entirely.

What Distributed Tracing Shows You

A trace follows a single request across every service, database call, and external request it touches. Each operation becomes a "span" with timing data, and spans nest to show the full call tree. You can follow a request across services, find which database query or service call is the bottleneck, debug errors in context, and see which services depend on each other.

Without tracing, debugging a distributed system means correlating timestamps across log files from different services. With tracing, you click a request and see everything.

The OpenTelemetry Approach

OpenTelemetry is the standard for distributed tracing. It's vendor-neutral, well-supported, and works with most observability backends (Jaeger, Zipkin, Datadog, Grafana Tempo). The tradeoff is setup complexity.

Here's what a full OpenTelemetry setup looks like for a TypeScript API.

Install Dependencies

npm install @opentelemetry/api \
  @opentelemetry/sdk-node \
  @opentelemetry/sdk-trace-node \
  @opentelemetry/exporter-trace-otlp-http \
  @opentelemetry/resources \
  @opentelemetry/semantic-conventions \
  @opentelemetry/instrumentation-http \
  @opentelemetry/instrumentation-express \
  @opentelemetry/instrumentation-pg

That's nine packages before you write a line of application code.

Configure the SDK

Create a tracing setup file that initializes before your application starts:

// tracing.ts
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { Resource } from "@opentelemetry/resources";
import {
  ATTR_SERVICE_NAME,
  ATTR_SERVICE_VERSION,
} from "@opentelemetry/semantic-conventions";
import { HttpInstrumentation } from "@opentelemetry/instrumentation-http";
import { ExpressInstrumentation } from "@opentelemetry/instrumentation-express";
import { PgInstrumentation } from "@opentelemetry/instrumentation-pg";

const sdk = new NodeSDK({
  resource: new Resource({
    [ATTR_SERVICE_NAME]: "my-api",
    [ATTR_SERVICE_VERSION]: "1.0.0",
  }),
  traceExporter: new OTLPTraceExporter({
    url: "http://localhost:4318/v1/traces",
  }),
  instrumentations: [
    new HttpInstrumentation(),
    new ExpressInstrumentation(),
    new PgInstrumentation(),
  ],
});

sdk.start();

process.on("SIGTERM", () => {
  sdk.shutdown().then(() => process.exit(0));
});

This file must load before any other imports, so pass the compiled output to Node's --require flag. In package.json:

{
  "scripts": {
    "start": "node --require ./tracing.js dist/index.js"
  }
}

Add Manual Spans

Auto-instrumentation covers HTTP and database calls, but your business logic needs manual spans:

import { trace, SpanStatusCode } from "@opentelemetry/api";
import express from "express";

const app = express();
const tracer = trace.getTracer("my-api");

app.post("/orders", async (req, res) => {
  const span = tracer.startSpan("create-order");
  try {
    // Validate input
    const validationSpan = tracer.startSpan("validate-order", {
      attributes: { "order.items": req.body.items?.length },
    });
    const order = validateOrder(req.body);
    validationSpan.end();

    // Check inventory
    const inventorySpan = tracer.startSpan("check-inventory");
    await checkInventory(order.items);
    inventorySpan.end();

    // Save to database
    const dbSpan = tracer.startSpan("save-order");
    const saved = await saveOrder(order);
    dbSpan.setAttribute("order.id", saved.id);
    dbSpan.end();

    // Publish event
    const publishSpan = tracer.startSpan("publish-order-created");
    await publishOrderCreated(saved);
    publishSpan.end();

    span.setStatus({ code: SpanStatusCode.OK });
    res.json(saved);
  } catch (error) {
    span.setStatus({
      code: SpanStatusCode.ERROR,
      message: error instanceof Error ? error.message : "Unknown error",
    });
    span.recordException(error as Error);
    res.status(500).json({ error: "Failed to create order" });
  } finally {
    span.end();
  }
});

Every operation gets wrapped in span creation and cleanup. The error handling has to set span status, record the exception, and still end the span. Miss any of these and you get incomplete or leaked traces.
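OpenTelemetry's tracer.startActiveSpan can shoulder some of this, but you still have to end the span and set status yourself. A common mitigation is a small wrapper that centralizes the try/catch/finally. The sketch below uses a minimal SpanLike stand-in (an assumption, not the real OpenTelemetry Span interface) so it runs standalone; with the real SDK you would build the same helper on top of tracer.startActiveSpan:

```typescript
// SpanLike is a minimal stand-in for OpenTelemetry's Span interface,
// so this sketch runs without the SDK installed.
interface SpanLike {
  setStatus(status: { code: "OK" | "ERROR"; message?: string }): void;
  recordException(err: Error): void;
  end(): void;
}

// Hypothetical factory standing in for tracer.startSpan(name).
function startSpan(name: string): SpanLike {
  return {
    setStatus: () => {},
    recordException: () => {},
    end: () => {},
  };
}

// Wrap any async operation so the span is always ended exactly once,
// with ERROR status and the exception recorded on failure.
async function withSpan<T>(
  name: string,
  fn: (span: SpanLike) => Promise<T>
): Promise<T> {
  const span = startSpan(name);
  try {
    const result = await fn(span);
    span.setStatus({ code: "OK" });
    return result;
  } catch (err) {
    const error = err instanceof Error ? err : new Error(String(err));
    span.setStatus({ code: "ERROR", message: error.message });
    span.recordException(error);
    throw err;
  } finally {
    span.end();
  }
}
```

With a helper like this, each step in the handler above collapses to a single line, e.g. `await withSpan("check-inventory", () => checkInventory(order.items))`, and the cleanup rules live in one place instead of every route.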

Propagate Context Across Services

When one service calls another, you need to propagate the trace context so spans connect into a single trace:

import { context, propagation } from "@opentelemetry/api";

async function callInventoryService(items: Item[]): Promise<void> {
  const headers: Record<string, string> = {};

  // Inject trace context into outgoing request headers
  propagation.inject(context.active(), headers);

  await fetch("http://inventory-service/check", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      ...headers, // Includes traceparent, tracestate headers
    },
    body: JSON.stringify({ items }),
  });
}

The receiving service needs to extract that context and use it as the parent for its own spans. Every service-to-service call requires this propagation.
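On the receiving side you would call propagation.extract(context.active(), req.headers) and start spans against the returned context. What travels on the wire is the W3C Trace Context traceparent header, "version-traceid-parentid-flags". The standalone parser below is illustrative only (not part of the OpenTelemetry API) but shows exactly what the propagator encodes and validates:

```typescript
// The pieces of a W3C traceparent header, e.g.
// "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
interface TraceParent {
  version: string;
  traceId: string; // 32 hex chars, shared by every span in the trace
  parentId: string; // 16 hex chars, the caller's span id
  sampled: boolean; // low bit of the flags byte
}

function parseTraceparent(header: string): TraceParent | null {
  const m =
    /^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/.exec(header);
  if (!m) return null;
  const [, version, traceId, parentId, flags] = m;
  // All-zero trace or span ids are invalid per the spec.
  if (/^0+$/.test(traceId) || /^0+$/.test(parentId)) return null;
  return {
    version,
    traceId,
    parentId,
    sampled: (parseInt(flags, 16) & 1) === 1,
  };
}
```

The traceId is what ties spans from different services into one trace; the parentId is how the receiving service nests its spans under the caller's.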

Run a Collector

You also need something to receive the traces. Locally, that means running Jaeger or a collector:

docker run -d --name jaeger \
  -p 16686:16686 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest

Then open http://localhost:16686 to view traces.

What This Gets You

OpenTelemetry is a solid standard and works with any backend. But the setup involves:

  • Nine npm packages
  • A tracing configuration file that must load first
  • Manual span creation around business logic
  • Explicit context propagation between services
  • A collector running alongside your app
  • Per-service instrumentation configuration

For a single service this is manageable. For a system with ten services, it's a lot of boilerplate to maintain, and any gap in instrumentation means missing data.

The Zero-Instrumentation Approach

Encore.ts takes a different approach. Instead of instrumenting your code manually, the Encore runtime traces everything automatically. The same order creation endpoint looks like this:

// orders/orders.ts
import { api, APIError } from "encore.dev/api";
import { orders } from "./db";

interface Order {
  id: string;
  items: OrderItem[];
  total: number;
  createdAt: Date;
}

interface CreateOrderRequest {
  items: OrderItem[];
}

export const create = api(
  { expose: true, method: "POST", path: "/orders" },
  async (req: CreateOrderRequest): Promise<Order> => {
    const validated = validateOrder(req);
    await checkInventory(validated.items);

    const order = await orders.queryRow<Order>`
      INSERT INTO orders (items, total)
      VALUES (${JSON.stringify(validated.items)}, ${validated.total})
      RETURNING id, items, total, created_at as "createdAt"
    `;

    await orderCreated.publish({ orderId: order!.id });
    return order!;
  }
);

Everything is traced automatically. The trace appears in the local dashboard the moment you call this endpoint, with full timing data and request/response payloads.

When this endpoint calls another service, the trace context propagates automatically:

// orders/orders.ts
import { inventory } from "~encore/clients";

async function checkInventory(items: OrderItem[]): Promise<void> {
  const result = await inventory.check({ items });
  if (!result.available) {
    throw APIError.failedPrecondition("items not in stock");
  }
}

The call to inventory.check() is a service-to-service call. Encore tracks it as a child span in the same trace, with full request/response data, without any instrumentation on your part.

What Gets Traced Automatically

Encore's runtime instruments these operations out of the box:

  • API endpoints: request/response data, status codes, latency, including the full typed payload
  • Service-to-service calls: the complete call chain across services, with request/response data at each hop
  • Database queries: SQL statements with parameter values, execution time, rows affected
  • Pub/Sub publishing and subscriptions: topic, message data, delivery status
  • Cache operations: gets, sets, deletes with key information and hit/miss status
  • Authentication: auth handler execution, the resolved user data
  • HTTP requests: outgoing HTTP calls with URL, method, status, and timing breakdowns (DNS, TLS, time to first byte)

Each operation captures stack traces so you can click any span and see the exact line of code that triggered it.

Viewing Traces Locally

When you run encore run, a local development dashboard starts at localhost:9400. Every request to your API generates a trace you can inspect immediately.

Click any request to see a timeline of all operations: nested spans showing the call tree, request/response data for each API call, SQL queries with parameters and execution time, pub/sub messages, and errors with stack traces. For multi-service applications, the trace shows the complete request path across all services with timing for each step.

The trace viewer is built in, so there's nothing extra to run or install.

Production Tracing

Encore Cloud

When you deploy with Encore Cloud, traces are collected and stored automatically. The Trace Explorer lets you:

  • Search traces by service, endpoint, status code, or time range
  • Filter for slow requests or errors
  • View traces from any environment (production, staging, preview environments)
  • Compare traces across deployments to spot regressions

Traces from Preview Environments are included, so you can debug issues in a PR before it reaches production.

Getting Started

Install Encore and see traces in under five minutes.

# Install CLI (macOS)
brew install encoredev/tap/encore

# Or Linux / Windows (WSL)
curl -L https://encore.dev/install.sh | bash

# Create a project
encore app create tracing-demo --example=ts/hello-world
cd tracing-demo

# Run it
encore run

Make a request:

curl http://localhost:4000/hello/World

Open http://localhost:9400 and click the request. You'll see the trace with timing data, request/response payloads, and any database queries or service calls. Everything works out of the box.

To see cross-service tracing, try the multi-service starter or add a second service and call it using Encore's generated clients. The trace will show the full call chain automatically.
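A second service can be as small as one file. The sketch below is a hypothetical inventory service — the file layout, endpoint name, and request/response shapes are assumptions for illustration, not from an Encore starter — showing the shape of an endpoint that the orders service could call through the generated `~encore/clients` client:

```typescript
// inventory/inventory.ts — hypothetical second service for illustration.
import { api } from "encore.dev/api";

interface CheckRequest {
  items: { sku: string; quantity: number }[];
}

interface CheckResponse {
  available: boolean;
}

// Internal endpoint (expose: false) reachable from other services
// via the generated client: inventory.check({ items }).
export const check = api(
  { expose: false, method: "POST", path: "/inventory/check" },
  async (req: CheckRequest): Promise<CheckResponse> => {
    // Stubbed logic; a real service would query its own database here.
    return { available: req.items.length > 0 };
  }
);
```

Once both services run under encore run, a POST to /orders produces a single trace containing the orders span, the inventory.check child span, and any database queries each one makes.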

OpenTelemetry vs Zero-Instrumentation

| | OpenTelemetry | Encore |
|---|---|---|
| Setup | SDK config, exporters, collector | None |
| Instrumentation | Manual spans + auto-instrumentation packages | Automatic for all operations |
| Context propagation | Manual injection/extraction | Automatic across service calls |
| Local trace viewer | Run Jaeger/Zipkin in Docker | Built-in at localhost:9400 |
| New service added | Configure tracing for each service | Traces work immediately |
| Maintenance | Update packages, fix instrumentation gaps | None |

OpenTelemetry gives you flexibility and vendor choice. Encore gives you complete traces from the first line of code. If you're building with Encore, you get distributed tracing as a built-in capability rather than an integration project.


Have questions? Join our Discord community where developers help each other daily.

Ready to escape the maze of complexity?

Encore Cloud is the development platform for building robust type-safe distributed systems with declarative infrastructure.