Distributed Tracing

Trace Every Request
Across Every Service

Follow every request through every service, database, and external dependency with zero-code auto-instrumentation. Pinpoint bottlenecks and errors in milliseconds with AI-assisted waterfall analysis.

200+
Auto-instrumented libs
<2ms
Overhead
100%
Error capture
30 days
Full retention
Trace Waterfall · trace_id: 4a8f1b2c9e3d…
9 spans · 1 error · 412ms total

Span / Service · Timeline (0 — 412ms)
POST /api/checkout · api-gateway · 412ms
validateCart() · cart-service · 28ms
getUser() · user-service · 22ms
SELECT * FROM users · postgres · 18ms
processPayment() · payment-service · 310ms
stripe.charges.create · stripe-api · 280ms
createOrder() · order-service · 38ms
INSERT INTO orders · postgres · 32ms
sendConfirmation() · notification-svc · 12ms

Error in sendConfirmation() — ConnectionRefused: smtp.internal:587 — linked to 3 log lines

Complete Distributed Visibility

From auto-instrumentation to AI-powered analysis, TigerOps Traces gives you total request visibility across your entire microservices architecture.

Auto-Instrumentation

One-line agent setup automatically instruments 200+ frameworks and libraries — no code changes, no vendor lock-in.

Service Maps

Live topology maps show dependencies, error rates, and latency between every service. Spot cascade failures instantly.

Latency Analysis

Waterfall views, flame graphs, and p50/p95/p99 breakdowns reveal exactly where time is spent in every request path.

Error Correlation

Errors in traces are automatically linked to their originating log lines and the deployment that introduced them.

Trace-to-Log Linking

Jump from any span to its correlated log lines in one click. Full context without switching tools or platforms.

Intelligent Sampling

Head-based and tail-based sampling strategies keep costs predictable while ensuring 100% capture of errors and slow traces.

OpenTelemetry Native

Drop-in integrations with the tools your team already uses.

OpenTelemetry · SDK
Jaeger · Import
Zipkin · Import
Node.js · Auto
Python · Auto
Java · Auto
Go · Auto
Ruby · Auto
.NET · Auto
PHP · Auto
gRPC · Protocol
GraphQL · Protocol

Frequently Asked Questions

Do I need to modify my application code to enable tracing?

No. The TigerOps agent uses OpenTelemetry auto-instrumentation to hook into 200+ popular frameworks and libraries at the runtime level. For Node.js, Python, Java, Go, Ruby, .NET, and PHP you simply install the agent package and it instruments your application automatically on startup — no code changes, no vendor lock-in.
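To make the runtime-level hooking concrete, here is a minimal, self-contained Python sketch of the idea behind auto-instrumentation: patching a library entry point at startup so every call is recorded as a span. This is an illustration only, not the TigerOps agent's actual implementation; the `CartService` class and `SPANS` list are hypothetical stand-ins.

```python
import functools
import time
import uuid

SPANS = []  # collected spans; a real agent exports these asynchronously

def instrument(obj, name):
    """Wrap obj.<name> at runtime so every call is recorded as a span.

    This mirrors what auto-instrumentation agents do on startup:
    patch well-known library entry points, with no application changes.
    """
    original = getattr(obj, name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        span = {"id": uuid.uuid4().hex[:8], "name": name, "error": None}
        start = time.perf_counter()
        try:
            return original(*args, **kwargs)
        except Exception as exc:
            span["error"] = repr(exc)
            raise
        finally:
            span["duration_ms"] = (time.perf_counter() - start) * 1000
            SPANS.append(span)

    setattr(obj, name, wrapper)

# "Library" code the application calls, untouched by the developer:
class CartService:
    def validate_cart(self, items):
        return len(items) > 0

instrument(CartService, "validate_cart")  # done by the agent, not the app
CartService().validate_cart(["sku-1"])
print(SPANS[0]["name"])  # -> validate_cart
```

The application code never mentions tracing; the wrapper captures timing and errors transparently, which is why no code changes are needed.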

How does tail-based sampling work?

With tail-based sampling the agent buffers spans in memory until a trace is complete, then decides whether to keep it based on its outcome. This means 100% of error traces and slow traces are always retained regardless of your sampling rate, while routine fast traces are sampled down to control volume and cost.
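The keep/drop decision can be sketched in a few lines. The following toy sampler is an assumption-laden illustration of the buffering logic described above, not TigerOps internals; the slow-trace threshold and sample rate are made-up values.

```python
import random
from collections import defaultdict

SLOW_MS = 250       # hypothetical threshold: slower traces are always kept
SAMPLE_RATE = 0.10  # hypothetical fraction of routine traces to keep

class TailSampler:
    """Buffer spans per trace; decide keep/drop only once the trace completes."""

    def __init__(self, rng=random.random):
        self.buffers = defaultdict(list)
        self.rng = rng

    def on_span(self, trace_id, span):
        self.buffers[trace_id].append(span)

    def on_trace_complete(self, trace_id):
        spans = self.buffers.pop(trace_id)
        has_error = any(s.get("error") for s in spans)
        total_ms = max(s["duration_ms"] for s in spans)  # root span ~ trace duration
        # Errors and slow traces are always retained; the rest are sampled down.
        if has_error or total_ms > SLOW_MS or self.rng() < SAMPLE_RATE:
            return spans  # export
        return None       # drop

sampler = TailSampler(rng=lambda: 0.99)  # deterministic: never random-keep
sampler.on_span("t1", {"duration_ms": 12, "error": "ConnectionRefused"})
print(sampler.on_trace_complete("t1") is not None)  # error trace -> kept
sampler.on_span("t2", {"duration_ms": 5})
print(sampler.on_trace_complete("t2") is None)      # fast, healthy -> dropped
```

Because the decision happens after the trace completes, the sampler can see the outcome, which head-based sampling by definition cannot.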

Can I import traces from Jaeger or Zipkin?

Yes. TigerOps accepts traces via the OpenTelemetry OTLP protocol and also supports the Jaeger and Zipkin wire formats natively. You can migrate from an existing Jaeger or Zipkin deployment by simply changing the exporter endpoint — no data transformation or format conversion required.
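For teams routing traces through an OpenTelemetry Collector, the migration is a matter of adding an exporter pointing at the new backend while keeping the existing Jaeger/Zipkin receivers. A hypothetical collector fragment, where the `endpoint` value is a placeholder, not a real TigerOps address:

```yaml
receivers:
  jaeger:
    protocols:
      thrift_http:
  zipkin:

exporters:
  otlp:
    endpoint: ingest.tigerops.example:4317  # placeholder endpoint

service:
  pipelines:
    traces:
      receivers: [jaeger, zipkin]
      exporters: [otlp]
```

Services keep emitting in whatever format they already use; only the export destination changes.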

How are traces linked to logs and metrics?

Every span carries a trace ID and span ID that are automatically injected into structured log output when you use our logging integration. In the UI, clicking any span opens its correlated log lines inline. The same trace ID is also available as a dimension on metrics, so you can drill from a metric anomaly directly to representative traces.
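The injection mechanism can be illustrated with Python's standard `logging` module and a `contextvars`-based span context. This is a stdlib sketch of the general pattern, not the TigerOps logging integration itself; the variable names and the sample IDs are illustrative.

```python
import contextvars
import logging

# Current span context; an instrumentation layer would set these per request.
current_trace_id = contextvars.ContextVar("trace_id", default="-")
current_span_id = contextvars.ContextVar("span_id", default="-")

class TraceContextFilter(logging.Filter):
    """Stamp every log record with the active trace and span IDs."""
    def filter(self, record):
        record.trace_id = current_trace_id.get()
        record.span_id = current_span_id.get()
        return True

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    "%(levelname)s trace_id=%(trace_id)s span_id=%(span_id)s %(message)s"))
logger = logging.getLogger("checkout")
logger.addHandler(handler)
logger.addFilter(TraceContextFilter())
logger.setLevel(logging.INFO)

current_trace_id.set("4a8f1b2c9e3d")  # set by tracing middleware in practice
current_span_id.set("b7e2")
logger.info("SMTP connection refused")
# Every log line now carries IDs the tracing backend can join on.
```

Once the IDs appear in structured log output, correlating a span to its log lines is a straightforward join on `trace_id` and `span_id`.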

What is the performance overhead of the tracing agent?

The TigerOps tracing agent adds less than 2 milliseconds of overhead per request and uses less than 1% additional CPU under typical load. Traces are exported asynchronously on a background thread so the critical path of your application is never blocked by telemetry export.
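The asynchronous-export pattern looks roughly like the following: the request path only enqueues spans in memory, and a daemon thread batches and ships them. A simplified sketch under assumed batch-size and flush-interval values, not the agent's actual exporter:

```python
import queue
import threading

class AsyncExporter:
    """Export spans on a background thread so requests never block on I/O."""

    def __init__(self, send, batch_size=10, flush_interval=0.5):
        self.q = queue.Queue()
        self.send = send            # callable that ships a batch (network I/O)
        self.batch_size = batch_size
        self.flush_interval = flush_interval
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def record(self, span):
        # Called on the request's critical path: just an in-memory enqueue.
        self.q.put(span)

    def shutdown(self):
        self.q.put(None)            # sentinel: flush remaining spans and stop
        self._worker.join()

    def _run(self):
        batch = []
        while True:
            try:
                item = self.q.get(timeout=self.flush_interval)
            except queue.Empty:
                if batch:           # flush partial batch on idle timeout
                    self.send(batch)
                    batch = []
                continue
            if item is None:        # sentinel received
                if batch:
                    self.send(batch)
                return
            batch.append(item)
            if len(batch) >= self.batch_size:
                self.send(batch)
                batch = []

sent = []
exporter = AsyncExporter(send=sent.append, batch_size=2)
for name in ("a", "b", "c"):
    exporter.record({"name": name})  # cheap enqueue, no blocking
exporter.shutdown()
print(sum(len(b) for b in sent))  # -> 3
```

The cost on the hot path is a single queue put, which is why per-request overhead stays in the low-millisecond range.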

See Every Request. Fix Every Issue.

Stop guessing where latency hides. Start tracing in minutes with zero code changes.

No credit card required · 14-day free trial · Cancel anytime