
Prometheus Integration

Use TigerOps as a Prometheus-compatible long-term backend. Remote-write from your existing Prometheus servers, scrape exporters directly, and get AI-powered anomaly detection on all your metrics.

Setup

How It Works

01

Add Remote Write Config

In your prometheus.yml, add a remote_write block pointing to TigerOps. Your existing Prometheus setup doesn't change — TigerOps just receives a copy.

02

Metrics Flow to TigerOps

All scraped metrics are forwarded to TigerOps in near real time. TigerOps stores every metric with 13-month retention, independent of whatever local retention your Prometheus servers are configured with.

03

AI Baselines Your Stack

TigerOps ingests your Prometheus metrics and builds per-metric, per-label behavioral baselines. Dynamic anomaly thresholds are calculated automatically — no manual alert tuning.

04

Query with PromQL

TigerOps exposes a PromQL-compatible query endpoint. Your existing Grafana dashboards, alerting rules, and query scripts work without modification.
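Since the endpoint follows the standard Prometheus HTTP API shape (GET /api/v1/query?query=...&time=...), a query URL can be built with nothing but the standard library. The base URL below mirrors the remote_write example and is an assumption, as is the helper name; use the query endpoint shown in your TigerOps workspace settings.

```python
from typing import Optional
from urllib.parse import urlencode

# Assumed base URL for illustration; take the real one from your
# TigerOps workspace settings.
BASE_URL = "https://metrics.us1.tigerops.io/api/v1/prometheus"

def instant_query_url(expr: str, time: Optional[str] = None) -> str:
    """Build a GET URL for a PromQL instant query, per the standard
    Prometheus HTTP API: /api/v1/query?query=<expr>[&time=<ts>]."""
    params = {"query": expr}
    if time is not None:
        params["time"] = time
    return f"{BASE_URL}/query?{urlencode(params)}"

# Example: per-mode CPU rate over 5 minutes, excluding idle time
url = instant_query_url('rate(node_cpu_seconds_total{mode!="idle"}[5m])')
```

The same URL works from curl, a Grafana datasource, or any Prometheus client library, since they all speak the same HTTP API shape.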

Capabilities

What You Get Out of the Box

Prometheus Remote Write

Add a single remote_write block to your prometheus.yml and all metrics flow to TigerOps in real time — no agent, no code changes, no downtime.

Exporter Scraping

TigerOps can directly scrape any Prometheus exporter endpoint — node_exporter, blackbox_exporter, postgres_exporter, and more — without a Prometheus server.
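The managed scraper is configured in the TigerOps UI rather than in a prometheus.yml. As a rough sketch of what an equivalent target list conveys (the field names here are hypothetical, not the actual TigerOps schema):

```yaml
# Hypothetical managed-scraper target list; the real configuration
# lives in the TigerOps UI and its field names may differ.
scrape_targets:
  - name: node
    url: http://10.0.1.15:9100/metrics    # node_exporter endpoint
    interval: 15s
  - name: postgres
    url: http://10.0.1.22:9187/metrics    # postgres_exporter endpoint
    interval: 30s
```

Each target just needs to expose the Prometheus text exposition format over HTTP, which all standard exporters do.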

PromQL Compatibility

Query all your metrics using PromQL from TigerOps dashboards or via the HTTP API. Existing Grafana datasources can be pointed at TigerOps directly.
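If you provision Grafana datasources from files, pointing one at TigerOps could look like the sketch below. The endpoint URL mirrors the remote_write example and is an assumption, so check your workspace settings for the real one; the provisioning fields themselves are standard Grafana.

```yaml
# grafana/provisioning/datasources/tigerops.yml
apiVersion: 1
datasources:
  - name: TigerOps
    type: prometheus            # PromQL-compatible, so the stock plugin works
    access: proxy
    url: https://metrics.us1.tigerops.io/api/v1/prometheus
    jsonData:
      httpHeaderName1: Authorization
    secureJsonData:
      httpHeaderValue1: "Bearer <TIGEROPS_API_KEY>"
```

Using the built-in Prometheus datasource type means existing dashboards can be repointed by swapping the datasource, with no panel changes.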

13-Month Retention

TigerOps stores all Prometheus metrics for 13 months — eliminating the need to run expensive long-term storage solutions like Thanos or Cortex yourself.

AI Anomaly Detection

On top of your existing Prometheus data, the AI SRE builds dynamic baselines and fires incidents when metrics deviate — far more accurate than static alert thresholds.

Recording Rule Migration

Import your existing Prometheus recording rules and alerting rules into TigerOps. Rules execute in the cloud against your metric data — no Prometheus server required.
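Because the import accepts the standard Prometheus rules-file format, an existing file like the sketch below should carry over unchanged (the metric names, thresholds, and group names are illustrative):

```yaml
groups:
  - name: node_recording
    interval: 30s
    rules:
      # Precompute per-instance CPU utilisation from raw counters
      - record: instance:node_cpu_utilisation:rate5m
        expr: 1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m]))
  - name: node_alerts
    rules:
      - alert: HighCpu
        expr: instance:node_cpu_utilisation:rate5m > 0.9
        for: 10m
        labels:
          severity: warning
```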

Configuration

Remote Write Config

Add this block to your prometheus.yml to start forwarding metrics to TigerOps.

prometheus.yml
# Add to your existing prometheus.yml

global:
  scrape_interval: 15s
  evaluation_interval: 15s
  external_labels:
    cluster: 'production'
    region: 'us-east-1'

# --- Add this block to forward to TigerOps ---
remote_write:
  - url: "https://metrics.us1.tigerops.io/api/v1/prometheus/write"
    remote_timeout: 30s
    queue_config:
      capacity: 10000
      max_samples_per_send: 5000
      batch_send_deadline: 5s
    write_relabel_configs:
      # Optional: drop high-cardinality metrics before forwarding
      - source_labels: [__name__]
        regex: 'go_.*'
        action: drop
    # Note: Prometheus does not expand environment variables inside
    # prometheus.yml, so template the key in with your config-management
    # tooling rather than relying on ${...} substitution at runtime.
    headers:
      Authorization: "Bearer <TIGEROPS_API_KEY>"
    tls_config:
      insecure_skip_verify: false

# Your existing scrape_configs remain unchanged
scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']

FAQ

Common Questions

Does TigerOps replace Prometheus?

Not necessarily. You can keep your existing Prometheus servers exactly as they are and use remote_write to forward a copy of all metrics to TigerOps. This gives you TigerOps's long-term retention and AI analysis while keeping your local Prometheus for low-latency alerting.

Is TigerOps PromQL-compatible?

Yes. TigerOps exposes a PromQL-compatible HTTP query API. You can use it as a Grafana datasource, query it through the standard Prometheus HTTP API endpoints, and most PromQL expressions work without modification.

How does TigerOps handle high-cardinality metrics?

TigerOps uses an efficient columnar storage engine optimized for high-cardinality label sets. For extremely high-cardinality series (millions of unique label combinations), TigerOps provides cardinality analysis tools to help you identify and reduce cardinality before it impacts query performance.
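Standard PromQL also gives a quick first-pass cardinality picture before reaching for dedicated tooling; the metric and label names below are illustrative:

```promql
# Top 10 metric names by active series count
topk(10, count by (__name__)({__name__=~".+"}))

# Series count for one metric, broken out by a suspect label
count by (path) (http_requests_total)
```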

Can I scrape exporters without running a Prometheus server?

Yes. TigerOps includes a managed scraper that can poll any HTTP endpoint exposing Prometheus text exposition format. Configure scrape targets in the TigerOps UI and data flows in without needing to run Prometheus yourself.

What happens if the remote_write connection drops?

Prometheus buffers samples in its local WAL (write-ahead log) and retries failed remote_write requests with exponential backoff. When the connection recovers, the queued samples are delivered, so a transient outage loses nothing; only an outage that outlasts your local WAL retention risks dropped samples.
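Retry behavior is tunable in the same remote_write block of prometheus.yml; the values below are illustrative, not recommendations:

```yaml
remote_write:
  - url: "https://metrics.us1.tigerops.io/api/v1/prometheus/write"
    queue_config:
      max_shards: 50        # parallelism ceiling for resending a backlog
      min_backoff: 30ms     # initial delay after a failed send
      max_backoff: 5s       # cap on the exponential backoff between retries
```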

Get Started

Supercharge Your Prometheus Setup

No credit card required. Keep your existing setup. Add AI on top.