Vector Integration

Route logs and metrics from Vector, Datadog's Rust-native pipeline agent, to TigerOps. VRL transforms, multi-sink fan-out, and full pipeline observability, all in one config file.

Setup

How It Works

01

Add the TigerOps Sink

Add the http sink to your vector.toml configuration pointing to the TigerOps ingest endpoint. Vector handles compression, batching, and retries natively.
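
A minimal version of this step, using the ingest endpoint from the full configuration further down this page, looks like:

```toml
# Minimal TigerOps log sink; batching and retry tuning shown in the full config below
[sinks.tigerops_logs]
type          = "http"
inputs        = ["kubernetes_logs"]
uri           = "https://ingest.atatus.net/api/v1/logs"
method        = "post"
compression   = "gzip"
auth.strategy = "bearer"
auth.token    = "${TIGEROPS_API_KEY}"
```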

02

Configure Transform Pipelines

Use Vector's remap (VRL) transform to normalize field names, mask sensitive data, and enrich records before they reach TigerOps. VRL is compiled ahead of time, so transform overhead stays minimal.
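
As a sketch of the masking step, the hypothetical transform below redacts card-like digit runs in `.message` and drops a `.password` field; the field names and regex are illustrative, not TigerOps requirements:

```toml
# Hypothetical masking transform: redact long digit runs, drop a sensitive field
[transforms.mask_sensitive]
type   = "remap"
inputs = ["kubernetes_logs"]
source = '''
  .message = replace(string!(.message), r'\d{13,16}', "[REDACTED]")
  del(.password)
'''
```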

03

Route Metrics via Prometheus Sink

Forward host metrics, StatsD metrics, and custom application metrics from Vector to TigerOps using the prometheus_remote_write sink for unified metric storage.
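
For the StatsD part of this step, a sketch (assuming apps emit to UDP port 8125 on the Vector host, and mirroring the metrics sink from the Configuration section) might look like:

```toml
# Hypothetical: receive StatsD metrics and forward them with host metrics
[sources.app_statsd]
type    = "statsd"
address = "0.0.0.0:8125"
mode    = "udp"

[sinks.tigerops_metrics]
type     = "prometheus_remote_write"
inputs   = ["host_metrics", "app_statsd"]
endpoint = "https://ingest.atatus.net/api/v1/write"
auth.strategy = "bearer"
auth.token    = "${TIGEROPS_API_KEY}"
```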

04

Monitor Pipeline Health

TigerOps ingests Vector's internal metrics (component throughput, error rates, buffer utilization) to give you full visibility into your observability pipeline health.

Capabilities

What You Get Out of the Box

Sub-millisecond Routing

Vector's Rust-native architecture delivers sub-millisecond event routing with minimal CPU and memory overhead — ideal for high-throughput log pipelines.

VRL Transform Support

Use Vector Remap Language (VRL) to reshape, filter, and enrich log events before they reach TigerOps. All TigerOps-specific field requirements are documented with ready-to-use VRL snippets.

Multi-Sink Fan-out

Route the same event stream to TigerOps, S3, and Kafka simultaneously using Vector's fan-out topology. Each sink receives an independent copy with no shared state.
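
Fan-out in Vector is just listing the same transform in each sink's `inputs`. A sketch with a hypothetical S3 bucket and Kafka broker (some required sink options elided):

```toml
# Same stream, three independent destinations
[sinks.tigerops_logs]
type   = "http"
inputs = ["normalize_logs"]
uri    = "https://ingest.atatus.net/api/v1/logs"

[sinks.archive_s3]
type   = "aws_s3"
inputs = ["normalize_logs"]
bucket = "log-archive"            # hypothetical bucket
region = "us-east-1"
encoding.codec = "json"

[sinks.stream_kafka]
type   = "kafka"
inputs = ["normalize_logs"]
bootstrap_servers = "kafka:9092"  # hypothetical broker
topic  = "logs"
encoding.codec = "json"
```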

Metrics + Logs in One Agent

Replace separate metric agents and log forwarders with a single Vector process. Use prometheus_scrape source + prometheus_remote_write sink for metrics, and http sink for logs.

Adaptive Request Sizing

The http sink automatically adjusts batch sizes based on throughput and latency. During traffic spikes, Vector back-pressures upstream sources instead of dropping events.
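
The adaptive and back-pressure behavior can be made explicit in config; a sketch, assuming the `tigerops_logs` sink from the Configuration section:

```toml
[sinks.tigerops_logs.request]
concurrency = "adaptive"    # scale in-flight requests with observed latency

[sinks.tigerops_logs.buffer]
type       = "memory"
max_events = 10000
when_full  = "block"        # back-pressure sources instead of dropping events
```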

Pipeline Observability

Vector exposes its own internal metrics. TigerOps ingests component_received_events_total, component_errors_total, and buffer utilization for full pipeline health dashboards.

Configuration

vector.toml Sink Configuration

Route logs and metrics to TigerOps with VRL field normalization.

vector.toml
# vector.toml — Log and metric routing to TigerOps

[sources.kubernetes_logs]
type = "kubernetes_logs"
auto_partial_merge = true

[sources.host_metrics]
type = "host_metrics"
scrape_interval_secs = 15
collectors = ["cpu", "memory", "network", "disk"]

# Normalize log fields with VRL
[transforms.normalize_logs]
type   = "remap"
inputs = ["kubernetes_logs"]
source = '''
  .service_name = .kubernetes.pod_labels.app ?? "unknown"
  .severity = downcase(string!(.level ?? .severity ?? "info"))
  .trace_id  = .traceId ?? .trace_id ?? null
  del(.kubernetes.pod_annotations)
'''

# Forward logs to TigerOps
[sinks.tigerops_logs]
type              = "http"
inputs            = ["normalize_logs"]
uri               = "https://ingest.atatus.net/api/v1/logs"
method            = "post"
encoding.codec    = "json"
framing.method    = "newline_delimited"
compression       = "gzip"
auth.strategy     = "bearer"
auth.token        = "${TIGEROPS_API_KEY}"

[sinks.tigerops_logs.batch]
max_bytes    = 10485760  # 10MB
timeout_secs = 5

[sinks.tigerops_logs.request]
retry_attempts = 10

# Forward metrics to TigerOps via remote_write
[sinks.tigerops_metrics]
type     = "prometheus_remote_write"
inputs   = ["host_metrics"]
endpoint = "https://ingest.atatus.net/api/v1/write"
auth.strategy = "bearer"
auth.token    = "${TIGEROPS_API_KEY}"

FAQ

Common Questions

Which Vector versions does TigerOps support?

TigerOps supports Vector 0.25 and later. The http and prometheus_remote_write sinks used in the integration have been stable since Vector 0.20. We recommend running the latest stable release for the best performance.

Can Vector replace my existing Fluentd or Logstash deployment?

In many cases, yes. Vector provides source compatibility with the Fluentd forward protocol and the Elasticsearch bulk API, making migration straightforward. TigerOps supports both Vector and the legacy agents during a transition period.
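
A transitional setup can accept traffic from existing Fluentd or Fluent Bit forwarders via Vector's fluent source; the listen address below is illustrative:

```toml
# Hypothetical: accept existing Fluentd/Fluent Bit forwarders during migration
[sources.fluent_in]
type    = "fluent"
address = "0.0.0.0:24224"
```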

How do I forward internal Vector metrics to TigerOps?

Add an internal_metrics source and a prometheus_remote_write sink to your vector.toml. Tag the metrics with pipeline_name and host labels. TigerOps provides a pre-built dashboard for Vector pipeline health.
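
A sketch of that setup, with a hypothetical pipeline_name value, could look like:

```toml
[sources.vector_internal]
type = "internal_metrics"

# Attach pipeline_name and host labels before forwarding
[transforms.tag_pipeline]
type   = "remap"
inputs = ["vector_internal"]
source = '''
  .tags.pipeline_name = "prod-logs"   # hypothetical label value
  .tags.host = get_hostname!()
'''

[sinks.tigerops_pipeline_health]
type     = "prometheus_remote_write"
inputs   = ["tag_pipeline"]
endpoint = "https://ingest.atatus.net/api/v1/write"
auth.strategy = "bearer"
auth.token    = "${TIGEROPS_API_KEY}"
```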

Is Vector suitable for edge and IoT deployments?

Yes. Vector's single static binary, low memory footprint (typically under 10MB idle), and filesystem buffering make it well-suited for edge devices with intermittent connectivity to TigerOps.
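
For intermittent connectivity, a disk buffer on the sink keeps events across outages and restarts; a sketch, assuming the `tigerops_logs` sink and an illustrative 256 MiB cap:

```toml
[sinks.tigerops_logs.buffer]
type      = "disk"
max_size  = 268435456   # 256 MiB on-disk buffer for offline periods
when_full = "block"
```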

Can I use Vector to migrate from Datadog to TigerOps?

Yes. Vector supports a datadog_agent source that receives DogStatsD metrics and Datadog log payloads. Point your existing Datadog agents at Vector, and Vector forwards to TigerOps — enabling a parallel-run migration with zero agent changes on hosts.
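
A sketch of the parallel-run receiver side, with an illustrative listen address; the sink mirrors the one in the Configuration section:

```toml
# Hypothetical: receive traffic from existing Datadog agents unchanged
[sources.dd_agents]
type    = "datadog_agent"
address = "0.0.0.0:8080"

[sinks.tigerops_logs]
type   = "http"
inputs = ["dd_agents"]
uri    = "https://ingest.atatus.net/api/v1/logs"
```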

Get Started

One Agent. All Your Observability Data. TigerOps Intelligence.

Replace multiple log and metric agents with Vector and TigerOps. Rust-native performance, zero lock-in.