A user reports that a page loads slowly. You check the logs and see the request took 1.2 seconds, but you have no idea why. Was it the database query? The call to Stripe? The cache miss? Request tracing answers that question by showing you every operation a request triggers, how long each one took, and how they relate to each other.
This guide walks through four approaches to request tracing in Node.js, from simple logging to fully automatic instrumentation. We'll start with the manual approaches so you understand the fundamentals, then look at how modern frameworks can handle all of this for you.
When an HTTP request hits your server, it usually triggers a chain of operations: a database lookup, a cache check, a call to an external API, maybe a second database query to store the result. Request tracing captures that chain as a structured timeline, so instead of scattered log lines you get a single view of everything that happened during one request.
A trace consists of spans. Each span represents one operation: the incoming HTTP request is the root span, and everything it triggers becomes a child span nested underneath it. A database query is a span. An outbound HTTP call is a span. You end up with a tree that shows what happened, in what order, and how long each step took.
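To make that concrete, here is a minimal sketch of a span tree as plain data. The types and the timing numbers are our own illustration, not any particular tracing library's API:

```typescript
// Minimal illustrative model of a trace. Real tracing libraries
// (e.g. OpenTelemetry) use richer types, but the shape is the same.
interface Span {
  name: string;       // e.g. "GET /users/:id" or "db.query"
  startMs: number;    // offset from the start of the trace
  durationMs: number;
  children: Span[];
}

// A hypothetical trace for a request that runs a database query,
// then an outbound HTTP call.
const trace: Span = {
  name: "GET /users/42",
  startMs: 0,
  durationMs: 847,
  children: [
    { name: "db.query SELECT * FROM users", startMs: 2, durationMs: 23, children: [] },
    { name: "http GET orders-api", startMs: 27, durationMs: 812, children: [] },
  ],
};

// Walking the tree answers "where did the time go?" directly.
const childTime = trace.children.reduce((sum, s) => sum + s.durationMs, 0);
console.log(childTime); // 835
```

Everything that follows in this guide is, one way or another, a strategy for building this tree with less manual effort.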
This is useful for any backend, not just microservices. Even a single-service application that talks to a database and an external API benefits from seeing the timing breakdown of each request.
The simplest approach is middleware that logs when a request starts and finishes. In Express, that looks like this:
```typescript
import express from "express";

const app = express();

// Request logging middleware
app.use((req, res, next) => {
  const start = Date.now();
  res.on("finish", () => {
    const duration = Date.now() - start;
    console.log(
      `${req.method} ${req.path} ${res.statusCode} ${duration}ms`
    );
  });
  next();
});

// "db" is assumed to be an already-configured pg Pool
app.get("/users/:id", async (req, res) => {
  const user = await db.query("SELECT * FROM users WHERE id = $1", [
    req.params.id,
  ]);
  const orders = await fetch(
    `https://orders-api.example.com/users/${req.params.id}`
  );
  res.json({ user: user.rows[0], orders: await orders.json() });
});

app.listen(3000);
```
Your logs now show:
```
GET /users/42 200 847ms
```
You know the request took 847ms, but you don't know where the time went. Was it the database query? The external API call? Both? To find out, you need to instrument individual operations.
You can wrap each operation in a timer to get a breakdown:
```typescript
app.get("/users/:id", async (req, res) => {
  const timings: Record<string, number> = {};

  // Time the database query
  let start = Date.now();
  const user = await db.query("SELECT * FROM users WHERE id = $1", [
    req.params.id,
  ]);
  timings["db.getUser"] = Date.now() - start;

  // Time the external API call
  start = Date.now();
  const orders = await fetch(
    `https://orders-api.example.com/users/${req.params.id}`
  );
  const ordersData = await orders.json();
  timings["api.getOrders"] = Date.now() - start;

  console.log("Request timings:", timings);
  // { "db.getUser": 23, "api.getOrders": 812 }

  res.json({ user: user.rows[0], orders: ordersData });
});
```
Now you can see the external API call is the bottleneck. But this approach has problems. The timing code is tangled into your business logic. Every new database query or API call needs its own wrapper. You'll inevitably miss some operations, and the ones you do instrument add clutter to every handler. It also doesn't capture the parent-child relationship between operations. You just get a flat list of timings.
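You can at least factor the repetition into a helper before reaching for a full tracing system. This is a sketch of that idea; the `timed` helper and the timing names are our own, not a library API:

```typescript
// Hypothetical helper: time any async operation and record its
// duration under a name, even if the operation throws.
async function timed<T>(
  timings: Record<string, number>,
  name: string,
  op: () => Promise<T>
): Promise<T> {
  const start = Date.now();
  try {
    return await op();
  } finally {
    timings[name] = Date.now() - start;
  }
}

// Usage inside a handler (db and fetch as in the example above):
// const timings: Record<string, number> = {};
// const user = await timed(timings, "db.getUser", () =>
//   db.query("SELECT * FROM users WHERE id = $1", [req.params.id])
// );
```

This tidies up the handlers, but it doesn't fix the fundamental problems: the output is still a flat list, and it still relies on you remembering to wrap every operation.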
OpenTelemetry (OTel) is the industry standard for distributed tracing. It gives you proper spans with parent-child relationships, timing, and metadata. The setup requires a few packages and some initialization code (see our complete OTel setup guide for the full walkthrough):
```typescript
// tracing.ts - must be imported before anything else
import { NodeSDK } from "@opentelemetry/sdk-node";
import { ConsoleSpanExporter } from "@opentelemetry/sdk-trace-node";
import { HttpInstrumentation } from "@opentelemetry/instrumentation-http";
import { ExpressInstrumentation } from "@opentelemetry/instrumentation-express";
import { PgInstrumentation } from "@opentelemetry/instrumentation-pg";

const sdk = new NodeSDK({
  traceExporter: new ConsoleSpanExporter(),
  instrumentations: [
    new HttpInstrumentation(),
    new ExpressInstrumentation(),
    new PgInstrumentation(),
  ],
});

sdk.start();
```
With that initialized, HTTP requests and PostgreSQL queries are traced automatically. But for custom operations like business logic or calls to services that don't have an OTel instrumentation library, you still need manual spans:
```typescript
import { trace, SpanStatusCode } from "@opentelemetry/api";

const tracer = trace.getTracer("my-app");

app.get("/users/:id", async (req, res) => {
  const user = await db.query("SELECT * FROM users WHERE id = $1", [
    req.params.id,
  ]);

  // Manual span for custom logic
  const enrichedUser = await tracer.startActiveSpan(
    "enrichUserProfile",
    async (span) => {
      try {
        const profile = await fetchProfile(user.rows[0].id);
        span.setAttribute("profile.source", "cache");
        return { ...user.rows[0], ...profile };
      } catch (err) {
        // Record the failure on the span before rethrowing
        span.recordException(err as Error);
        span.setStatus({ code: SpanStatusCode.ERROR });
        throw err;
      } finally {
        // Always end the span, even if fetchProfile throws
        span.end();
      }
    }
  );

  res.json(enrichedUser);
});
```
OTel is a significant improvement. You get real spans, proper parent-child relationships, and automatic instrumentation for common libraries. The tradeoff is setup complexity: you need to pick the right SDK packages, configure exporters (Jaeger, Zipkin, or a vendor), install instrumentation libraries for each dependency, and make sure the tracing initialization runs before your application code loads. For a production setup, you'll also want sampling, batching, and an exporter pointed at a backend like Jaeger or Grafana Tempo.
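For illustration, a production-leaning configuration might look like the sketch below. The package names are the standard OTel ones, but the endpoint URL is a placeholder and the sampling ratio is an arbitrary example; check both against the OpenTelemetry docs and your backend's requirements:

```typescript
// tracing.ts - sketch of a production-oriented setup (configuration only)
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { TraceIdRatioBasedSampler } from "@opentelemetry/sdk-trace-node";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";

const sdk = new NodeSDK({
  // Export to an OTLP-compatible backend (Jaeger, Tempo, a vendor, ...).
  // NodeSDK wraps the exporter in a batching span processor by default.
  traceExporter: new OTLPTraceExporter({
    url: "http://localhost:4318/v1/traces", // placeholder endpoint
  }),
  // Sample a fraction of traces to keep overhead and storage bounded
  sampler: new TraceIdRatioBasedSampler(0.1),
  // Auto-instrument common libraries instead of listing them one by one
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();
```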
Encore.ts takes a different approach. Tracing is built into the runtime, so every operation is captured automatically without setup code or instrumentation libraries. The same endpoint from above looks like this:
```typescript
import { api } from "encore.dev/api";
import { SQLDatabase } from "encore.dev/storage/sqldb";

const db = new SQLDatabase("users", {
  migrations: "./migrations",
});

interface User {
  id: string;
  name: string;
  email: string;
}

interface UserProfile {
  user: User;
  orderCount: number;
}

export const getUser = api(
  { expose: true, method: "GET", path: "/users/:id" },
  async ({ id }: { id: string }): Promise<UserProfile> => {
    const user = await db.queryRow<User>`
      SELECT id, name, email FROM users WHERE id = ${id}
    `;
    if (!user) {
      throw new Error("user not found");
    }
    const stats = await db.queryRow<{ count: number }>`
      SELECT COUNT(*) as count FROM orders WHERE user_id = ${id}
    `;
    return { user, orderCount: stats?.count ?? 0 };
  }
);
```
Tracing is built in from the start. Encore's Rust-based runtime captures every database query, every API call between services, every HTTP request, and every cache operation as structured trace spans. Request and response payloads are included too, so you can see the exact data that flowed through each step.
This works because Encore understands your infrastructure declarations. It knows `db` is a PostgreSQL database and automatically instruments every query. If you add Pub/Sub or more services, those appear in traces too. Nothing to configure.
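For example, declaring a Pub/Sub topic is all it takes for publishes and deliveries to show up in traces. The sketch below is based on Encore.ts's `encore.dev/pubsub` module; the event type, topic name, and subscription name are illustrative:

```typescript
import { Topic, Subscription } from "encore.dev/pubsub";

// Illustrative event type for this sketch
interface OrderCompletedEvent {
  orderId: string;
  userId: string;
}

// Declaring the topic is the only "instrumentation" required;
// every publish appears as a span in the publishing request's trace.
export const orderCompleted = new Topic<OrderCompletedEvent>("order-completed", {
  deliveryGuarantee: "at-least-once",
});

// Each delivery to this subscription is traced as its own span tree.
const _ = new Subscription(orderCompleted, "send-receipt", {
  handler: async (event) => {
    // ... send a receipt email for event.orderId
  },
});
```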
A trace displays as a waterfall of spans, each indented to show the parent-child relationship. For the endpoint above, you'd see something like this:
```
GET /users/42                          total: 34ms
├── Query: SELECT id, name, email ...  12ms
└── Query: SELECT COUNT(*) ...         8ms
```
The root span is the HTTP request. Each child span shows the operation type, the actual query or URL, and its duration. If a database query is slow, you see it immediately. If one operation waits on another, the waterfall makes that obvious.
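In the waterfall above, for instance, the two queries run one after the other even though neither depends on the other's result. The generic fix is to start independent operations together, sketched here with simulated delays standing in for the real queries:

```typescript
// Simulated async operation; in the real handler these would be
// the two database queries, which are independent of each other.
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Sequential awaits: total time is the sum of the operations
async function serial(): Promise<number> {
  const start = Date.now();
  await sleep(50);
  await sleep(50);
  return Date.now() - start; // roughly 100ms
}

// Promise.all: total time is the slowest operation
async function parallel(): Promise<number> {
  const start = Date.now();
  await Promise.all([sleep(50), sleep(50)]);
  return Date.now() - start; // roughly 50ms
}
```

After a change like this, the trace makes the improvement visible: the two child spans overlap in the waterfall instead of stacking end to end.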
For a more complex request that involves multiple services, caching, and external calls, the trace might look like:
```
POST /checkout                              total: 245ms
├── Query: SELECT * FROM carts ...          15ms
├── Cache: Get pricing:user:42              2ms
├── Call: payments.Charge                   180ms
│   ├── POST https://api.stripe.com/...     162ms
│   └── Query: INSERT INTO payments ...     9ms
├── Publish: order.completed                3ms
└── Query: UPDATE carts SET status ...      11ms
```
Every operation appears in the trace regardless of which service handled it. You can see that the Stripe call dominates the response time, and you didn't have to add a single line of instrumentation code to get that information.
Tracing isn't just a production tool; it's equally valuable during development. When you run an Encore application locally with `encore run`, every request is traced and viewable in the local development dashboard at `localhost:9400`.
```bash
encore run
```
Open http://localhost:9400 in your browser. The dashboard shows a list of recent requests. Click any request to see its full trace waterfall with timing, request/response data, and any errors.
This changes how you debug. Instead of adding console.log statements and restarting your server, you make a request and look at the trace. You can see the exact SQL query that ran, how long it took, and what it returned. If a request fails, the trace shows which operation threw the error and the full stack trace.
The dashboard also lets you call your API endpoints directly, so you can test and trace requests from one interface without switching to curl or Postman.
Install the Encore CLI and create a project:
```bash
# macOS
brew install encoredev/tap/encore

# Linux
curl -L https://encore.dev/install.sh | bash

# Windows
iwr https://encore.dev/install.ps1 | iex
```
Create a new application and start the development server:
```bash
encore app create my-app --example=ts/hello-world
cd my-app
encore run
```
Open http://localhost:9400, make a request to your API, and click on it to see the trace. Every endpoint you add from here on is automatically traced, including any database queries, cache operations, or service-to-service calls.
| Approach | Setup | What's Captured | Timing Breakdown | Production-Ready |
|---|---|---|---|---|
| Basic logging | Minimal | Request/response only | Total duration | Limited |
| Manual timers | Per-operation | What you remember to wrap | Flat list | Fragile |
| OpenTelemetry | Moderate | Libraries with instrumentation + manual spans | Span tree | Yes, with configuration |
| Encore | None | Everything automatically | Full waterfall | Yes, built-in |
The manual approaches work for quick debugging but don't scale. OpenTelemetry is comprehensive but requires ongoing maintenance as you add dependencies. Encore captures everything by default because it understands your infrastructure from your code.