When a request enters your Go API, it might call two other services, run a database query, and publish a message to a queue. If any of that is slow or broken, you need to see exactly where the problem is. Distributed tracing gives you that visibility: a single timeline showing every service call, every database query, and where the time went.
This guide covers two approaches. First, the manual approach using the OpenTelemetry Go SDK. Then, we'll look at how modern frameworks can eliminate most of this instrumentation work entirely, giving you full tracing from the first request without writing any tracing code.
A trace follows a single request across every service, database call, and external dependency it touches. Each operation becomes a span with a name, start time, and duration. Spans nest to form a tree that represents the complete call chain.
```
Trace ID: a1b2c3d4e5f6

orders.GetOrder             [============================] 95ms
├── users.GetProfile        [=====] 12ms
│   └── db.query            [===] 7ms
├── inventory.CheckStock    [=======] 28ms
│   └── db.query            [====] 15ms
└── db.query                [===] 9ms
```
From this you can see that inventory.CheckStock takes the most time and the database query inside it is the bottleneck. Without tracing, debugging a distributed system means correlating timestamps across log files from different services. With tracing, you click a request and see everything.
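The same analysis a trace viewer lets you do visually can be sketched in plain Go. The `spanNode` type and the durations below are illustrative only (they mirror the timeline above, not any SDK type): model the spans as a tree and walk it to find the slowest leaf operation.

```go
package main

import "fmt"

// spanNode is a toy model of a trace span, with durations in milliseconds
// taken from the timeline above. It is not an SDK type.
type spanNode struct {
	name     string
	ms       int
	children []*spanNode
}

// exampleTrace builds the span tree shown in the timeline.
func exampleTrace() *spanNode {
	return &spanNode{name: "orders.GetOrder", ms: 95, children: []*spanNode{
		{name: "users.GetProfile", ms: 12, children: []*spanNode{{name: "db.query (users)", ms: 7}}},
		{name: "inventory.CheckStock", ms: 28, children: []*spanNode{{name: "db.query (inventory)", ms: 15}}},
		{name: "db.query (orders)", ms: 9},
	}}
}

// slowestLeaf walks the tree and returns the leaf span with the largest
// duration -- the bottleneck you would click on in a trace viewer.
func slowestLeaf(s *spanNode) *spanNode {
	if len(s.children) == 0 {
		return s
	}
	var worst *spanNode
	for _, c := range s.children {
		if l := slowestLeaf(c); worst == nil || l.ms > worst.ms {
			worst = l
		}
	}
	return worst
}

func main() {
	l := slowestLeaf(exampleTrace())
	fmt.Printf("slowest leaf: %s (%dms)\n", l.name, l.ms)
	// → slowest leaf: db.query (inventory) (15ms)
}
```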
OpenTelemetry is the CNCF standard for distributed tracing. It's vendor-neutral, well-supported across languages, and works with all major observability backends (Jaeger, Zipkin, Grafana Tempo, Datadog). The tradeoff is the amount of setup and instrumentation code required.
```shell
go get go.opentelemetry.io/otel \
  go.opentelemetry.io/otel/sdk \
  go.opentelemetry.io/otel/sdk/trace \
  go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp \
  go.opentelemetry.io/otel/propagation \
  go.opentelemetry.io/otel/semconv/v1.26.0 \
  go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp \
  go.opentelemetry.io/contrib/instrumentation/database/sql/otelsql
```
That's eight packages before you write a line of application code, and Go's module system means each one pulls in its own dependency tree.
Create a tracing setup function that initializes the SDK on startup:
```go
// tracing.go
package main

import (
	"context"
	"time"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
	"go.opentelemetry.io/otel/propagation"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
)

func initTracer(ctx context.Context, serviceName string) (func(), error) {
	exporter, err := otlptracehttp.New(ctx,
		otlptracehttp.WithEndpointURL("http://localhost:4318/v1/traces"),
	)
	if err != nil {
		return nil, err
	}

	tp := sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(exporter, sdktrace.WithBatchTimeout(5*time.Second)),
		sdktrace.WithResource(resource.NewWithAttributes(
			semconv.SchemaURL,
			semconv.ServiceNameKey.String(serviceName),
		)),
	)
	otel.SetTracerProvider(tp)
	otel.SetTextMapPropagator(propagation.TraceContext{})

	return func() {
		// Use a fresh context with a deadline for shutdown: the startup
		// ctx may already be cancelled by then, which would drop any
		// spans still buffered in the batcher.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		_ = tp.Shutdown(ctx)
	}, nil
}
```
This must run before your HTTP server starts:
```go
func main() {
	ctx := context.Background()
	shutdown, err := initTracer(ctx, "orders-service")
	if err != nil {
		log.Fatalf("failed to init tracer: %v", err)
	}
	defer shutdown()

	// Start HTTP server...
}
```
Auto-instrumentation packages handle HTTP and SQL calls, but your business logic needs manual spans:
```go
// Also requires "go.opentelemetry.io/otel/attribute" and
// "go.opentelemetry.io/otel/codes" among the imports.
var tracer = otel.Tracer("orders-service")

func getOrderHandler(w http.ResponseWriter, r *http.Request) {
	ctx, span := tracer.Start(r.Context(), "GetOrder")
	defer span.End()

	orderID := r.PathValue("id")
	span.SetAttributes(attribute.String("order.id", orderID))

	// Each downstream call needs its own span.
	// Load the order first so we know which user to fetch.
	dbCtx, dbSpan := tracer.Start(ctx, "QueryOrderDB")
	order, err := queryOrder(dbCtx, orderID)
	dbSpan.End()
	if err != nil {
		span.SetStatus(codes.Error, err.Error())
		span.RecordError(err)
		http.Error(w, "order not found", http.StatusNotFound)
		return
	}

	userCtx, userSpan := tracer.Start(ctx, "FetchUserProfile")
	user, err := fetchUserProfile(userCtx, order.UserID)
	userSpan.End()
	if err != nil {
		span.SetStatus(codes.Error, err.Error())
		span.RecordError(err)
		http.Error(w, "failed to fetch user", http.StatusInternalServerError)
		return
	}

	invCtx, invSpan := tracer.Start(ctx, "CheckInventory")
	stock, err := checkInventory(invCtx, orderID)
	invSpan.End()
	if err != nil {
		span.SetStatus(codes.Error, err.Error())
		span.RecordError(err)
		http.Error(w, "failed to check inventory", http.StatusInternalServerError)
		return
	}

	order.User = user
	order.InStock = stock
	span.SetStatus(codes.Ok, "")
	_ = json.NewEncoder(w).Encode(order)
}
```
Every operation gets wrapped in span creation and cleanup. Error handling has to set span status, record the error, and end the span in every code path. Miss any step and you get incomplete or leaked traces.
When one service calls another, you need to propagate trace context through HTTP headers so spans connect into a single trace:
```go
func fetchUserProfile(ctx context.Context, userID string) (*User, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet,
		"http://users-service/profiles/"+userID, nil)
	if err != nil {
		return nil, err
	}

	// Inject trace context into outgoing request headers
	otel.GetTextMapPropagator().Inject(ctx, propagation.HeaderCarrier(req.Header))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var user User
	err = json.NewDecoder(resp.Body).Decode(&user)
	return &user, err
}
```
The receiving service must extract that context and use it as the parent for its own spans (the otelhttp middleware handles this for inbound requests). A single missing Inject or Extract call breaks the trace for everything downstream.
You also need something to receive the traces. Locally, that means running Jaeger:
```shell
docker run -d --name jaeger \
  -p 16686:16686 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest
```
OpenTelemetry is a solid standard and works with any backend. But the setup involves eight Go modules, a startup configuration function, manual span creation around business logic, explicit context propagation between services, a running collector, and per-service tracer configuration. For a single service this is manageable. For a system with ten services, it's significant boilerplate, and any gap in instrumentation means missing data.
Encore.go takes a different approach. Because the framework's Rust-based runtime owns the transport layer between services, it traces everything automatically. You write application logic and the traces appear.
Here's the same order retrieval endpoint:
```go
// orders/orders.go
package orders

import (
	"context"

	"encore.app/users"
	"encore.dev/storage/sqldb"
)

type Order struct {
	ID       string `json:"id"`
	UserID   string `json:"userID"`
	Product  string `json:"product"`
	Quantity int    `json:"quantity"`
	Total    int    `json:"total"`
	UserName string `json:"userName"`
}

var db = sqldb.NewDatabase("orders", sqldb.DatabaseConfig{
	Migrations: "./migrations",
})

//encore:api public method=GET path=/orders/:id
func GetOrder(ctx context.Context, id string) (*Order, error) {
	row := db.QueryRow(ctx,
		"SELECT id, user_id, product, quantity, total FROM orders WHERE id = $1", id)

	var order Order
	err := row.Scan(&order.ID, &order.UserID, &order.Product, &order.Quantity, &order.Total)
	if err != nil {
		return nil, err
	}

	// Call the users service (traced automatically)
	profile, err := users.GetProfile(ctx, order.UserID)
	if err != nil {
		return nil, err
	}
	order.UserName = profile.Name

	return &order, nil
}
```
Everything is traced out of the box. The //encore:api annotation tells Encore this is an API endpoint, and the call to users.GetProfile is a service-to-service call that Encore traces automatically. The database query through sqldb appears as a child span with the SQL statement and execution time. There's no tracer to configure, no spans to manage, and no context to propagate manually.
The users service is a separate Go package with its own //encore:api endpoint and database. When orders.GetOrder calls users.GetProfile, Encore threads the trace context through its RPC layer automatically. The resulting trace shows GetOrder as the root span, with GetProfile as a child span and database queries nested inside each.
Pub/Sub works the same way. Declare a topic and subscriber, and traces cover the full lifecycle:
```go
// In the orders service: the event type and topic. The topic is exported
// so other services can subscribe to it.
type OrderEvent struct {
	OrderID string
	UserID  string
}

var OrderTopic = pubsub.NewTopic[*OrderEvent]("order-created", pubsub.TopicConfig{
	DeliveryGuarantee: pubsub.AtLeastOnce,
})

// Publishing is traced automatically
_, err = OrderTopic.Publish(ctx, &OrderEvent{OrderID: order.ID, UserID: req.UserID})

// Subscriber in another service. Delivery is linked back to the publish span.
var _ = pubsub.NewSubscription(orders.OrderTopic, "send-notification",
	pubsub.SubscriptionConfig[*orders.OrderEvent]{
		Handler: HandleOrderCreated,
	},
)
```
The publish, delivery, and handler execution all appear as connected spans in the same trace without any instrumentation code.
Encore's runtime instruments these operations out of the box:

- Incoming API requests and their responses
- Service-to-service calls
- Database queries made through sqldb
- Pub/Sub publishes and message deliveries
Each operation captures stack traces so you can click any span and see the exact line of code that triggered it.
When you run encore run, a local development dashboard starts at http://localhost:9400. Every request generates a trace you can inspect immediately. Click any request to see the full timeline: nested spans showing the call tree, request/response data for each API call, SQL queries with execution time, Pub/Sub messages, and errors with stack traces.
The trace viewer is built in -- there's no collector to run and no Docker containers to manage. The context.Context you already pass through your Go code is all Encore needs to correlate spans.
When you deploy with Encore Cloud, traces are collected and stored automatically. The Trace Explorer lets you search traces by service, endpoint, status code, or time range. Filter for slow requests or errors. View traces from any environment including Preview Environments, so you can debug issues in a PR before merging.
For self-hosted deployments, build a Docker image and run it anywhere:
```shell
encore build docker myapp:latest
```
Self-hosted deployments benefit from Encore's automatic instrumentation at the runtime level. To view and query traces, use Encore Cloud.
Install Encore and see traces in under five minutes.
```shell
# Install CLI (macOS)
brew install encoredev/tap/encore

# Or Linux / Windows (WSL)
curl -L https://encore.dev/install.sh | bash

# Create a Go project
encore app create tracing-demo --example=go/hello-world
cd tracing-demo

# Run it
encore run
```
Make a request:
```shell
curl http://localhost:4000/hello/World
```
Open http://localhost:9400 and click the request. You'll see the trace with timing data, request/response payloads, and any database queries or service calls. To see cross-service tracing, add a second service in a new Go package and call it from the first. The trace will show the full call chain automatically.
| | OpenTelemetry Go SDK | Encore.go |
|---|---|---|
| Setup | SDK config, exporters, collector | None |
| Dependencies | 8+ modules with transitive deps | Framework built-in |
| Instrumentation | Manual spans + otelhttp/otelsql wrappers | Automatic for all operations |
| Context propagation | Manual inject/extract via HTTP headers | Automatic across service calls |
| Pub/Sub tracing | Custom instrumentation per broker | Built-in for topics and subscriptions |
| Local trace viewer | Run Jaeger or Zipkin in Docker | Built-in at localhost:9400 |
| New service added | Configure tracing for each service | Traces work immediately |
| Maintenance | Update packages, fix instrumentation gaps | None |
| Vendor flexibility | Works with any OTel-compatible backend | Built-in trace viewer in Encore Cloud |
OpenTelemetry gives you full control over what to instrument and where to send data. Encore gives you complete traces from the first line of code without touching the tracing layer. If you're building with Encore, distributed tracing is a built-in capability rather than an integration project.
Encore's built-in trace viewer -- both locally and in Encore Cloud -- means you get full observability without needing to set up a separate tracing backend.
Have questions? Join our Discord community where developers help each other daily.