
Nginx Integration

Monitor request rates, upstream latency, active connections, and HTTP error distribution for Nginx. Add distributed tracing with the OTel module for full end-to-end visibility.

Setup

How It Works

01

Enable stub_status

Add the stub_status directive to your nginx.conf server block. This exposes active connections, accepts, handled requests, and reading/writing/waiting connection states.
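The stub_status endpoint returns a small fixed-format text page. As a minimal sketch of what an exporter reads from it (the parse_stub_status helper and the sample body below are illustrative, not part of the TigerOps exporter):

```python
import re

# Sample stub_status response body; the format is stable across nginx versions.
SAMPLE = """Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106
"""

def parse_stub_status(body: str) -> dict:
    """Parse nginx stub_status text into a flat metrics dict."""
    active = int(re.search(r"Active connections:\s+(\d+)", body).group(1))
    accepts, handled, requests = map(int, re.search(
        r"server accepts handled requests\s+(\d+)\s+(\d+)\s+(\d+)", body).groups())
    reading, writing, waiting = map(int, re.search(
        r"Reading:\s+(\d+)\s+Writing:\s+(\d+)\s+Waiting:\s+(\d+)", body).groups())
    return {"active": active, "accepts": accepts, "handled": handled,
            "requests": requests, "reading": reading,
            "writing": writing, "waiting": waiting}
```

Note that accepts, handled, and requests are cumulative counters since the last nginx restart, while the connection-state numbers are instantaneous gauges.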

02

Add OTel Module (Optional)

For distributed tracing, load the nginx-otel module. It injects trace context into upstream requests and reports per-request spans to your TigerOps OTLP endpoint.

03

Deploy the Nginx Exporter

Run the TigerOps nginx-exporter sidecar. It scrapes stub_status, parses access logs for latency percentiles and HTTP status distribution, and forwards to TigerOps.

04

Correlate Edge to Upstream

TigerOps links Nginx upstream latency spikes to traces from your backend services, making it easy to determine whether slowdowns originate at the proxy or downstream.

Capabilities

What You Get Out of the Box

Request Rate & Throughput

Requests per second, bytes in/out, and connection accept/handled rates. TigerOps tracks request volume trends and fires anomaly alerts on unexpected traffic spikes or drops.
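Because stub_status exposes cumulative counters, per-second rates come from the delta between two scrapes. A sketch of that conversion (the rates helper is hypothetical; the exporter's actual implementation may differ):

```python
def rates(prev: dict, curr: dict, interval_s: float) -> dict:
    """Convert two cumulative stub_status counter snapshots into per-second rates."""
    return {
        "requests_per_s": (curr["requests"] - prev["requests"]) / interval_s,
        "accepts_per_s": (curr["accepts"] - prev["accepts"]) / interval_s,
        # A growing accepts-minus-handled gap means connections were dropped
        # (e.g. at the worker_connections limit); it should normally stay 0.
        "dropped_per_s": ((curr["accepts"] - curr["handled"])
                          - (prev["accepts"] - prev["handled"])) / interval_s,
    }
```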

Upstream Latency Percentiles

P50, P95, and P99 upstream response times parsed from Nginx access logs. Identify which upstream locations contribute most to tail latency.
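Percentiles of this kind can be computed from the request_time / upstream_response_time samples in each scrape window; a minimal nearest-rank illustration (the percentile helper is generic, not TigerOps code, and real exporters often use streaming estimators instead):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest sample covering p% of observations."""
    if not samples:
        return None
    ordered = sorted(samples)
    rank = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[rank]
```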

HTTP Status Distribution

Real-time 2xx, 3xx, 4xx, and 5xx breakdown by virtual host and upstream location. TigerOps fires alerts on 5xx rate increases before they reach SLO-threatening levels.
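Bucketing raw status codes into classes is straightforward integer arithmetic; a sketch (status_classes is illustrative):

```python
from collections import Counter

def status_classes(statuses):
    """Bucket HTTP status codes into 2xx/3xx/4xx/5xx class ratios."""
    counts = Counter(f"{s // 100}xx" for s in statuses)
    total = len(statuses)
    return {cls: n / total for cls, n in counts.items()}
```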

Active Connection Tracking

Active, reading, writing, and waiting connection counts from stub_status. TigerOps alerts when connections approach worker_connections limits.
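The comparison against the worker_connections limit can be sketched like this (connection_headroom and the 80% warn ratio are assumptions for illustration, not documented TigerOps defaults):

```python
def connection_headroom(active, worker_processes, worker_connections,
                        warn_ratio=0.8):
    """Return (utilization, should_alert) against total connection capacity.

    Total capacity is worker_processes * worker_connections; each active
    proxied request can consume two connections (client + upstream).
    """
    capacity = worker_processes * worker_connections
    utilization = active / capacity
    return utilization, utilization >= warn_ratio
```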

Upstream Health

Upstream peer state (up/down/unavailable), active connections per upstream, and failed request counts. Correlate upstream failures with backend service incidents.

Distributed Trace Injection

With the nginx-otel module, TigerOps captures per-request trace spans including Nginx processing time, upstream selection, and response time — linked to backend traces.

Configuration

nginx.conf — stub_status + OTel Module

Enable stub_status and optionally load the nginx-otel module for distributed tracing.

nginx.conf
# nginx.conf — TigerOps monitoring configuration

# Load OTel module for distributed tracing (nginx 1.25+ or nginx-otel package)
load_module modules/ngx_otel_module.so;

http {
  # Structured log format for latency parsing
  log_format tigerops_json escape=json
    '{'
      '"time":"$time_iso8601",'
      '"method":"$request_method",'
      '"uri":"$uri",'
      '"status":$status,'
      '"request_time":$request_time,'
      '"upstream_response_time":"$upstream_response_time",'
      '"upstream_addr":"$upstream_addr",'
      '"bytes_sent":$bytes_sent'
    '}';

  access_log /var/log/nginx/access.log tigerops_json;

  # OTel tracing configuration
  otel_exporter {
    # gRPC OTLP endpoint as host:port (the module takes no URL scheme);
    # replace with your TigerOps ingest host
    endpoint ingest.tigerops.example:4317;
  }
  otel_service_name "nginx-gateway";
  otel_trace on;
  otel_trace_context propagate;  # Extract incoming and inject outgoing W3C traceparent

  server {
    listen 80;

    # stub_status endpoint (restrict to internal IPs)
    location /nginx_status {
      stub_status;
      allow 10.0.0.0/8;
      allow 172.16.0.0/12;
      deny all;
    }

    location / {
      proxy_pass http://upstream_backend;
      # otel_trace_context propagate (above) injects the W3C traceparent
      # header into upstream requests automatically; do not set it by hand
      # ($otel_trace_id alone is not a valid traceparent value).
    }
  }
}
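The tigerops_json format above emits one JSON object per line, with one wrinkle: $upstream_response_time is quoted as a string because nginx joins several comma-separated values when a request is retried against another peer, and emits "-" when no upstream was contacted. A minimal parsing sketch (parse_access_line is illustrative, not the exporter's code):

```python
import json

def parse_access_line(line: str) -> dict:
    """Parse one tigerops_json access-log line into a record.

    Splits $upstream_response_time into a list of floats, since nginx
    joins per-peer timings with ", " on upstream retries.
    """
    rec = json.loads(line)
    raw = rec.pop("upstream_response_time")
    rec["upstream_response_times"] = [
        float(v) for v in raw.split(", ") if v not in ("-", "")]
    return rec
```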
FAQ

Common Questions

Does TigerOps support Nginx Plus and its /api/ monitoring endpoint?

Yes. Nginx Plus exposes a rich JSON API at /api/ with per-upstream peer health, zone statistics, and cache metrics. TigerOps uses this API when available instead of stub_status for significantly richer upstream monitoring data.

Can TigerOps parse custom Nginx log formats?

Yes. The TigerOps Nginx exporter supports configurable log format patterns. You specify your log_format name and the exporter maps the fields to the correct metric labels. Common fields like $upstream_response_time, $request_time, and $status are supported out of the box.

Does TigerOps work with OpenResty / Lua modules?

Yes. OpenResty is Nginx-compatible and stub_status works identically. For OpenResty-specific metrics (lua_shared_dict usage, coroutine counts), TigerOps can be extended with a custom Lua script that pushes metrics to the TigerOps StatsD endpoint.

How do I add trace context propagation without the nginx-otel module?

If you cannot load the nginx-otel module, TigerOps can inject W3C traceparent headers into upstream requests using the map and proxy_set_header directives (nginx's built-in $request_id variable conveniently supplies a 32-hex-character value to use as the trace ID). Your backend services will receive the trace context and link their spans to the Nginx request automatically.
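For reference, the traceparent value the backend receives has the fixed W3C shape version-traceid-spanid-flags; a sketch of constructing one (make_traceparent is purely illustrative, not how nginx or TigerOps generates IDs):

```python
import secrets

def make_traceparent() -> str:
    """Build a W3C traceparent header value: 00-<traceid>-<spanid>-<flags>."""
    trace_id = secrets.token_hex(16)  # 32 hex chars, must not be all zeros
    span_id = secrets.token_hex(8)    # 16 hex chars, must not be all zeros
    return f"00-{trace_id}-{span_id}-01"  # 01 = sampled flag
```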

Can I monitor multiple Nginx instances across different servers?

Yes. Deploy one nginx-exporter per Nginx instance (or use the Kubernetes DaemonSet mode for containerized deployments). TigerOps groups them by environment, region, or custom label and supports cross-instance comparison in the dashboard.

Get Started

See Every Request Through Your Nginx Layer

Request rates, upstream latency, and distributed traces — connected to your backend services.