# Sharding
Available in Sidekiq 3.0+
Sidekiq has a scalability limit: normally it can only use a single Redis server. This usually isn't a problem: Redis is really fast, and on good hardware you can pump ~25,000 jobs/sec through Redis before you start to hit the ceiling. If you need to go beyond that limit, you have two choices:
- Break your application into smaller applications. You can use one Redis server per application.
- Shard your application’s jobs across many Redis instances.
## How to Shard
The latter option is reasonably simple with the `Sidekiq::Client` updates in Sidekiq 3.0:

```ruby
REDIS_A = ConnectionPool.new { Redis.new(...) }
REDIS_B = ConnectionPool.new { Redis.new(...) }

# To create a connection pool for a namespaced Sidekiq worker:
ConnectionPool.new do
  client = Redis.new(url: "Your Redis URL")
  Redis::Namespace.new("Your Namespace", redis: client)
end

# Create a job in the default Redis instance
SomeWorker.perform_async

# Push a job to REDIS_A using the low-level Client API
client = Sidekiq::Client.new(REDIS_A)
client.push(...)
client.push_bulk(...)

Sidekiq::Client.via(REDIS_B) do
  # All jobs created within this block will go to B
  SomeWorker.perform_async
end
```
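With multiple shards you usually also want a deterministic rule for which shard a given job lands on. A minimal sketch, not part of Sidekiq itself, using a CRC32 hash of a routing key; the `SHARDS` constant and `shard_for` helper are hypothetical, and the symbols stand in for the `ConnectionPool` objects above:

```ruby
require "zlib"

# Hypothetical stand-ins: in practice these would be the
# ConnectionPool objects (REDIS_A, REDIS_B) shown above.
SHARDS = [:redis_a, :redis_b]

# Route deterministically by hashing a stable routing key,
# so e.g. all of one user's jobs land on the same shard.
def shard_for(key)
  SHARDS[Zlib.crc32(key.to_s) % SHARDS.size]
end
```

With real pools, you would then wrap the push: `Sidekiq::Client.via(shard_for("user:42")) { SomeWorker.perform_async }`. Note that changing the number of shards remaps most keys, so plan the shard count up front.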
## Sidekiq API
You can use `sidekiq/api` with any shard by wrapping your API usage in a `Sidekiq::Client.via` block, which pre-selects the pool for that shard:

```ruby
> Sidekiq::Queue.all
=> [#<Sidekiq::Queue:0x0000000108597af0 @name="default", @rname="queue:default">]
> Sidekiq::Client.via(POOL) { Sidekiq::Queue.all }
=> []
```
## Limitations
Sharding comes with some limitations:
- The OSS Web UI is limited to the default connection pool. Sidekiq Pro supports multiple shards in the same process, see [[Pro-Web-UI]].
- You need to spin up Sidekiq processes to execute jobs from each Redis instance. A Sidekiq process only executes jobs from a single Redis instance.
- Sharding increases the complexity of your system. It's harder to track which jobs go to which instance, which complicates debugging.
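The one-process-per-instance requirement typically looks like this at deploy time. A sketch with hypothetical hostnames, relying on Sidekiq's standard behavior of reading the `REDIS_URL` environment variable:

```shell
# One Sidekiq process per Redis shard (hypothetical URLs).
REDIS_URL=redis://shard-a.internal:6379/0 bundle exec sidekiq &
REDIS_URL=redis://shard-b.internal:6379/0 bundle exec sidekiq &
```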
I don’t recommend sharding unless all other options are unavailable.