Server Middleware + Stats API

Sidekiq Integration

Monitor job queue sizes, worker thread utilization, retry and dead job counts, and distributed job traces across your Sidekiq deployment. Full visibility for Ruby background processing.

Setup

How It Works

01

Add the Gem

Add tigerops-ruby to your Gemfile. The gem installs a Sidekiq server middleware and uses Sidekiq::Stats to collect queue, worker, and lifecycle metrics with minimal performance overhead (under 0.5ms per job; see the FAQ).

02

Configure the Initializer

Create config/initializers/tigerops.rb with your API key. The initializer registers the middleware chain and starts the background stats thread that pushes metrics to TigerOps every 15 seconds.

03

Define Queue Alert Thresholds

Set per-queue depth thresholds and global retry queue limits in the TigerOps dashboard. TigerOps uses AI baselines to adapt thresholds to your traffic patterns automatically.

04

Trace Jobs From Web Requests

TigerOps propagates the parent trace ID from your Rails or Rack request into the Sidekiq job payload. Every job execution is linked back to the request that enqueued it.

Capabilities

What You Get Out of the Box

Queue Depth & Latency

Queue size and latency (time since oldest job was enqueued) for every named queue. TigerOps alerts when latency exceeds your SLO, not just when depth is high.
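Sidekiq itself exposes this measurement as Sidekiq::Queue#latency: the age of the oldest pending job, not the number of jobs. A minimal sketch of the computation (the helper name is illustrative, not part of the TigerOps API):

```ruby
# Queue latency = now minus the enqueued_at timestamp of the oldest
# pending job. A short queue can still violate a latency SLO if
# workers are stalled, which is why TigerOps alerts on latency.
def queue_latency(oldest_enqueued_at, now: Time.now)
  return 0.0 if oldest_enqueued_at.nil? # an empty queue has zero latency
  now.to_f - oldest_enqueued_at.to_f
end

# A queue whose oldest job was enqueued 120 seconds ago:
oldest = Time.now - 120
puts queue_latency(oldest).round # => 120
```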

Retry Queue Monitoring

Track retry count, next retry time, and failure class distribution in the retry set. TigerOps surfaces which worker classes are generating the most retries for targeted debugging.
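Each entry in Sidekiq's retry set records the worker class and the exception that triggered the retry (the "class" and "error_class" fields of the job hash). Grouping by error class is how the failure distribution is surfaced; a sketch over plain hashes shaped like retry entries (the worker names are illustrative):

```ruby
# Sample entries shaped like Sidekiq retry-set job hashes.
retries = [
  { "class" => "EmailWorker",  "error_class" => "Net::ReadTimeout" },
  { "class" => "EmailWorker",  "error_class" => "Net::ReadTimeout" },
  { "class" => "ReportWorker", "error_class" => "ActiveRecord::Deadlocked" },
]

# Tally retries per exception class to find the dominant failure mode.
by_error = retries.group_by { |j| j["error_class"] }
                  .transform_values(&:count)
puts by_error # => {"Net::ReadTimeout"=>2, "ActiveRecord::Deadlocked"=>1}
```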

Dead Job Accumulation

Monitor dead job count and growth rate. TigerOps alerts when dead jobs accumulate faster than your team can investigate, and groups them by worker class and error type.

Worker Thread Utilization

Busy vs. idle thread counts across all Sidekiq processes. Detect thread pool saturation and scale workers proactively before jobs pile up in queues.
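Each Sidekiq process advertises its thread pool size ("concurrency") and currently busy threads ("busy") via Sidekiq::ProcessSet. Fleet-wide utilization is total busy threads over total capacity; a sketch over plain hashes shaped like ProcessSet entries:

```ruby
# Three Sidekiq processes, each with 10 worker threads.
processes = [
  { "busy" => 9,  "concurrency" => 10 },
  { "busy" => 10, "concurrency" => 10 }, # saturated process
  { "busy" => 2,  "concurrency" => 10 },
]

# Aggregate busy threads and capacity across the fleet.
busy     = processes.sum { |p| p["busy"] }
capacity = processes.sum { |p| p["concurrency"] }
puts format("%.0f%% utilized (%d/%d threads busy)",
            100.0 * busy / capacity, busy, capacity)
# => 70% utilized (21/30 threads busy)
```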

Job Duration Histograms

P50, P95, and P99 job execution time per worker class. Identify which jobs are slowest and correlate duration regressions with code deploys or external API degradation.
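The percentiles make a single slow outlier visible where an average would hide it. A nearest-rank sketch over per-class durations (the worker name and timings are illustrative):

```ruby
# Nearest-rank percentile: pct-th percentile of a list of durations.
def percentile(values, pct)
  sorted = values.sort
  idx = ((pct / 100.0) * sorted.length).ceil - 1
  sorted[[idx, 0].max]
end

# One 5.1s outlier barely moves p50 but dominates p95/p99.
durations = { "EmailWorker" => [0.2, 0.3, 0.3, 0.4, 5.1] }
durations.each do |klass, secs|
  puts "#{klass}: p50=#{percentile(secs, 50)}s " \
       "p95=#{percentile(secs, 95)}s p99=#{percentile(secs, 99)}s"
end
# => EmailWorker: p50=0.3s p95=5.1s p99=5.1s
```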

Enqueue Rate by Class

Jobs enqueued per second per worker class. Detect enqueue rate spikes caused by runaway loops or upstream service events before they overwhelm your worker pool.
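A per-class rate over a sliding window is enough to make a runaway-loop spike obvious. A sketch with illustrative worker names and timestamps (not the TigerOps implementation):

```ruby
# events: [timestamp, worker_class] pairs; rate = events in window / window.
def enqueue_rate(events, window_seconds, now:)
  recent = events.select { |t, _| now - t <= window_seconds }
  recent.group_by { |_, klass| klass }
        .transform_values { |evts| evts.length / window_seconds.to_f }
end

now = Time.now.to_f
# A runaway loop enqueued 120 SyncWorker jobs in the last 10 seconds,
# against a normal trickle of 5 EmailWorker jobs.
events = Array.new(120) { [now - rand(10), "SyncWorker"] } +
         Array.new(5)   { [now - rand(10), "EmailWorker"] }

rates = enqueue_rate(events, 10, now: now)
puts rates # SyncWorker at 12.0 jobs/sec vs EmailWorker at 0.5
```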

Configuration

tigerops.rb Initializer

Add the TigerOps gem to your Gemfile and configure the initializer to instrument Sidekiq with a single configuration block.

Gemfile + config/initializers/tigerops.rb
# Gemfile
gem "tigerops-ruby", "~> 2.0"

# config/initializers/tigerops.rb
require "tigerops/sidekiq"

TigerOps.configure do |config|
  config.api_key  = ENV.fetch("TIGEROPS_API_KEY")
  config.endpoint = "https://ingest.atatus.net/api/v1/write"
  config.service_name = "my-rails-app"
  config.environment  = Rails.env

  # Sidekiq-specific settings
  config.sidekiq.enabled            = true
  config.sidekiq.trace_jobs         = true   # Distributed trace propagation
  config.sidekiq.capture_job_args   = false  # Set true to include args in traces (sanitize PII first)
  config.sidekiq.stats_interval     = 15     # Push queue/worker stats every N seconds
  # Alert thresholds (overridden by TigerOps AI baselines if not set)
  config.sidekiq.queue_depth_alert  = 500    # Alert when any queue exceeds 500 jobs
  config.sidekiq.retry_depth_alert  = 100    # Alert when retry set exceeds 100 jobs
  config.sidekiq.dead_depth_alert   = 50     # Alert when dead set exceeds 50 jobs
end

# Sidekiq server configuration (sidekiq.rb or config/sidekiq.yml)
Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add TigerOps::Sidekiq::ServerMiddleware
  end
end

Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
    chain.add TigerOps::Sidekiq::ClientMiddleware
  end
end

FAQ

Common Questions

Does TigerOps support Sidekiq Pro and Sidekiq Enterprise?

Yes. TigerOps supports Sidekiq OSS, Sidekiq Pro (including Batches and reliable fetch metrics), and Sidekiq Enterprise (rate limiting, leader election, and multi-process metrics). The gem auto-detects which edition is running and enables the appropriate metric collectors.

Can TigerOps monitor multiple Sidekiq processes across different servers?

Yes. All Sidekiq processes sharing the same Redis instance are automatically discovered. TigerOps aggregates queue metrics across all processes and provides per-process worker thread visibility for capacity planning.

How does distributed tracing work with Sidekiq jobs?

TigerOps injects the W3C TraceContext traceparent into the Sidekiq job payload hash when the job is enqueued. The server middleware extracts this on the worker side and creates a linked child span, preserving the full request-to-job trace.
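A W3C TraceContext traceparent is the string "00-<trace_id>-<parent_span_id>-<flags>" with a 16-byte trace ID and an 8-byte span ID in lowercase hex. A client middleware could inject it into the job payload roughly as below; this is a sketch of the mechanism, not TigerOps's actual middleware internals, and the helper name is hypothetical:

```ruby
require "securerandom"

# Inject a W3C traceparent header into a Sidekiq job payload hash.
# In real use the trace_id/span_id come from the active request span;
# here fresh IDs are generated for illustration.
def inject_traceparent(job, trace_id: SecureRandom.hex(16),
                            span_id: SecureRandom.hex(8))
  job["traceparent"] = "00-#{trace_id}-#{span_id}-01" # 01 = sampled flag
  job
end

job = inject_traceparent({ "class" => "EmailWorker", "args" => [42] })
puts job["traceparent"]
# e.g. 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
```

On the worker side, a server middleware reads job["traceparent"] back out and starts the job span as a child of that parent, which is what links the job to the originating request.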

Does TigerOps support Sidekiq Cron (sidekiq-cron gem)?

Yes. TigerOps monitors scheduled jobs created with sidekiq-cron and sidekiq-scheduler. It tracks last execution time, missed runs, and execution duration per cron job, alerting when a scheduled job fails silently.

Is there any performance overhead from the TigerOps Sidekiq middleware?

The overhead is less than 0.5ms per job execution. The middleware records the start/end timestamp and job metadata synchronously, then batches metric writes asynchronously on a background thread. No disk I/O occurs on the job execution path.

Get Started

Full Visibility Into Your Sidekiq Infrastructure

No credit card required. Connect in minutes. See queue depths, retry counts, and job traces immediately.