Ent Historical Metrics
Sidekiq Enterprise can send queue processing metrics to Statsd for dashboards and historical reporting.
Configuration
Configure a Statsd client
See the [[Pro Metrics]] page for how to tell Sidekiq about your Statsd server.
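For quick reference, a minimal client configuration with the dogstatsd-ruby gem might look like the sketch below; the host and port are assumptions for a locally running agent, see the Pro Metrics page for full details:

Sidekiq.configure_server do |config|
  # assumes a local Datadog agent listening on the default Statsd port
  config.dogstatsd = -> { Datadog::Statsd.new("localhost", 8125) }
end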
Enable History
In your initializer, add this:
Sidekiq.configure_server do |config|
  # history is captured every 30 seconds by default
  config.retain_history(30)
end
Metrics
Sidekiq Enterprise sends the following metrics:
- sidekiq.processed - Number of job executions completed (success or failure)
- sidekiq.failures - Number of job executions which raised an error
- sidekiq.enqueued - Total size of all known queues
- sidekiq.retries - Size of the retry set
- sidekiq.dead - Size of the dead set
- sidekiq.scheduled - Size of the scheduled set
- sidekiq.busy - Number of jobs currently executing
- sidekiq.queue.size (with tag queue:#{x}) - Current size of queue x
- sidekiq.queue.latency (with tag queue:default) - Latency of the default queue
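These values roughly correspond to what the standard Sidekiq API exposes. As a sketch, using the public Sidekiq::Stats and Sidekiq::Queue APIs (nothing Enterprise-specific), you can read the same numbers yourself:

require "sidekiq/api"

stats = Sidekiq::Stats.new
stats.processed       # roughly sidekiq.processed
stats.failed          # roughly sidekiq.failures
stats.enqueued        # roughly sidekiq.enqueued
stats.retry_size      # roughly sidekiq.retries
stats.dead_size       # roughly sidekiq.dead
stats.scheduled_size  # roughly sidekiq.scheduled
stats.workers_size    # roughly sidekiq.busy

Sidekiq::Queue.new("default").size     # roughly sidekiq.queue.size (queue:default)
Sidekiq::Queue.new("default").latency  # roughly sidekiq.queue.latency (queue:default)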
Namespace
Any namespace passed to the client configuration will be prepended to each metric, e.g. this code will create metrics which look like “myapp.sidekiq.busy” tagged with “service:sidekiq”:
Sidekiq.configure_server do |config|
  config.dogstatsd = -> {
    Datadog::Statsd.new("localhost", 8125,
      namespace: Rails.application.name,
      tags: ["service:sidekiq", "env:#{config[:environment]}"])
  }
end
Where possible, we recommend reading and following the Datadog best practices for tagging.
Custom
Notice above that latency is not gathered for every queue because it is a relatively expensive operation. You can add custom history metrics by passing a block to retain_history; the block can collect any further metrics you care about, including latencies for other important queues. Here we add the latency for the bulk and critical queues:
Sidekiq.configure_server do |config|
  config.retain_history(30) do |s|
    s.batch do |b|
      %w(bulk critical).each do |qname|
        q = Sidekiq::Queue.new(qname)
        b.gauge("sidekiq.queue.latency", q.latency, tags: ["queue:#{qname}"])
      end
    end
  end
end
The block will be passed a Statsd instance which quacks like the normal statsd client, e.g. dogstatsd-ruby.
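The same mechanism works for any value you can compute inside the block. As a sketch, here is a gauge reporting the number of live Sidekiq processes; the sidekiq.processes metric name is our own invention for illustration, not something Sidekiq emits on its own:

Sidekiq.configure_server do |config|
  config.retain_history(30) do |s|
    s.batch do |b|
      # "sidekiq.processes" is a made-up metric name for illustration
      b.gauge("sidekiq.processes", Sidekiq::ProcessSet.new.size)
    end
  end
end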