
Istio Integration

Full service mesh telemetry from Istio: per-service golden signals, mTLS certificate health, Envoy proxy metrics, and distributed traces — with zero application code changes.

Setup

How It Works

01

Apply the Telemetry CRD

Apply the TigerOps Telemetry custom resource to your Istio mesh. This configures Envoy sidecars across your mesh to export metrics and traces to TigerOps with no per-service changes.
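Assuming the manifest from the Configuration section of this page is saved locally as tigerops-istio.yaml, applying and confirming it is two commands:

```shell
# Apply the TigerOps Telemetry resource mesh-wide
kubectl apply -f tigerops-istio.yaml

# Confirm the Telemetry resource exists in the mesh-wide namespace
kubectl get telemetry -n istio-system tigerops-telemetry
```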

02

Configure EnvoyFilter

Apply the TigerOps EnvoyFilter to add request ID propagation and trace sampling configuration. TigerOps uses consistent sampling across the mesh for complete distributed traces.
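One way to confirm the filter reached the sidecars — pod and namespace names below are placeholders for one of your own workloads:

```shell
# List EnvoyFilters in the mesh-wide namespace
kubectl get envoyfilter -n istio-system

# Inspect one sidecar's HTTP filter chain for the mutation filter
istioctl proxy-config listener <pod> -n <namespace> -o json \
  | grep -A2 header_mutation
```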

03

Workload Metrics Appear

Within minutes, all inter-service request rates, latency percentiles, error ratios, and mTLS certificate health appear in TigerOps — per workload, per namespace, and per service.

04

AI Traffic Analysis Activates

TigerOps AI analyzes traffic patterns across your mesh, detects abnormal error rate increases, retry storms, and circuit breaker trips, and surfaces root causes automatically.

Capabilities

What You Get Out of the Box

Golden Signal Metrics per Service

Request rate, error rate, and P99 latency per workload and per service-to-service edge. TigerOps builds a real-time service topology map from Istio traffic data.
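These signals are derived from Istio's standard Prometheus metrics, which you can spot-check yourself. A sketch, assuming a Prometheus endpoint port-forwarded to localhost:9090 (the TigerOps UI computes these for you):

```shell
# Per-service request rate over 5 minutes (standard Istio metric)
curl -sG http://localhost:9090/api/v1/query \
  --data-urlencode 'query=sum(rate(istio_requests_total{reporter="destination"}[5m])) by (destination_service)'

# Error ratio: 5xx responses as a fraction of all requests
curl -sG http://localhost:9090/api/v1/query \
  --data-urlencode 'query=sum(rate(istio_requests_total{response_code=~"5.."}[5m])) / sum(rate(istio_requests_total[5m]))'
```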

mTLS Certificate Health

Track certificate expiry, rotation events, and mTLS policy enforcement state across all workloads. TigerOps alerts before certificate expiry causes unexpected connection failures.
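Istio exposes the same certificate data TigerOps tracks; you can inspect a workload's current certificate chain and expiry directly (pod and namespace are placeholders):

```shell
# Show the certificates loaded into a sidecar,
# including the NOT AFTER (expiry) timestamp
istioctl proxy-config secret <pod> -n <namespace>
```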

Envoy Proxy Metrics

Upstream cluster health, circuit breaker state, retry counts, and connection pool utilization per Envoy sidecar. Identify which proxies are under pressure.
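These stats originate from each sidecar's Envoy admin endpoint, which you can sample for a single proxy (workload name is a placeholder):

```shell
# Dump circuit breaker and retry stats from one sidecar's Envoy admin API
kubectl exec deploy/<workload> -c istio-proxy -- \
  pilot-agent request GET stats | grep -E 'circuit_breakers|upstream_rq_retry'
```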

Distributed Traces Across Mesh

Full end-to-end traces collected from Envoy sidecars via Zipkin/OTLP. No application code changes required — trace context propagation is handled by Istio.

VirtualService & DestinationRule Monitoring

Track traffic split ratios, retry policy effectiveness, and outlier detection events from your VirtualService and DestinationRule configurations.

Multi-Cluster Service Mesh

For multi-primary and primary-remote Istio deployments: cross-cluster traffic metrics, federation health, and per-cluster workload performance.

Configuration

EnvoyFilter + Telemetry CRD

Apply these Istio CRDs to configure mesh-wide telemetry export to TigerOps.

tigerops-istio.yaml
# TigerOps Istio Telemetry CRD
# Configures all Envoy sidecars to export traces to TigerOps OTLP endpoint
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: tigerops-telemetry
  namespace: istio-system  # Apply mesh-wide
spec:
  tracing:
    - providers:
        - name: tigerops-otel
      randomSamplingPercentage: 10.0  # Adjust based on traffic volume
  metrics:
    - providers:
        - name: prometheus  # TigerOps scrapes Istio Prometheus endpoint

---
# Register TigerOps as an OTLP trace provider
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
  namespace: istio-system
data:
  mesh: |
    extensionProviders:
    - name: tigerops-otel
      opentelemetry:
        service: tigerops-collector.tigerops-system.svc.cluster.local
        port: 4317

---
# EnvoyFilter — Add TigerOps request ID header for correlation
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: tigerops-request-id
  namespace: istio-system
spec:
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: ANY
        listener:
          filterChain:
            filter:
              name: envoy.filters.network.http_connection_manager
              subFilter:
                name: envoy.filters.http.router  # Insert before the router filter
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.filters.http.header_mutation
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.http.header_mutation.v3.HeaderMutation
            mutations:
              request_mutations:
                - append:
                    header:
                      key: x-tigerops-request-id
                      value: "%REQ(x-request-id)%"

FAQ

Common Questions

Which Istio versions does TigerOps support?

TigerOps supports Istio 1.17 and newer using the Telemetry API (v1alpha1 and v1beta1). For older Istio versions using MeshConfig telemetry, TigerOps provides a compatibility mode using the Prometheus scrape endpoint that Istio exposes.

Does TigerOps require changes to my application pods?

No. TigerOps configures Istio-level telemetry using CRDs (Telemetry and EnvoyFilter). The Envoy sidecar proxies export metrics and traces — your application pods need no changes, no SDK, and no sidecar additions.

Can TigerOps monitor Ambient Mesh (Istio without sidecars)?

Yes. TigerOps supports Istio Ambient Mesh via the ztunnel and waypoint proxy metric endpoints. The Telemetry CRD approach is replaced by a TigerOps DaemonSet that reads ztunnel metrics per node.

How does TigerOps handle high-cardinality service mesh metrics?

TigerOps applies cardinality reduction by default: it aggregates per-pod metrics to per-workload level and uses metric relabeling to drop high-cardinality labels like pod name from mesh metrics. You can configure per-workload granularity for critical services.
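In plain Prometheus terms, this reduction behaves like a labeldrop relabeling rule. A sketch of the equivalent configuration if you were setting up a Prometheus-compatible scrape yourself (job name is illustrative):

```yaml
# Illustrative scrape config: drop per-pod labels so mesh
# metrics aggregate at the workload level
scrape_configs:
  - job_name: istio-mesh
    metric_relabel_configs:
      - action: labeldrop
        regex: pod|pod_name
```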

Can TigerOps detect retry storms in the service mesh?

Yes. TigerOps monitors Envoy retry counts per upstream cluster and correlates them with upstream error rates. When retry amplification creates a feedback loop, TigerOps detects the pattern and fires a retry storm alert with the affected service graph.

Get Started

Make Your Service Mesh Observable

Golden signals per service, mTLS health, and distributed traces from Istio — with two kubectl apply commands.