What is RabbitMQ Prefetch Count?
If you are just getting started with RabbitMQ, you might be wondering what the prefetch count is. It is a setting you will come across often, and one worth being familiar with.
Answer: The RabbitMQ prefetch count limits how many unacknowledged messages a consumer can hold at once. It is part of RabbitMQ's quality-of-service (QoS) settings and is set through the prefetch_count argument of basic.qos. By default it is unlimited.
Here is an example of how you could set it from a Python script (for example, with the pika client):
channel.basic_qos(prefetch_count=1)
Leaving it unset can let a large number of unacknowledged messages pile up on a consumer.
This value will have no effect if auto-acks are enabled on the client.
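To make the one-liner above concrete, here is a minimal sketch of a consumer that applies the limit and acknowledges messages manually. It assumes the pika client, a broker on localhost, and a made-up queue name:

```python
import pika

# Hypothetical connection and queue; adjust for your own setup.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="task_queue", durable=True)

# Allow at most one unacknowledged message per consumer.
channel.basic_qos(prefetch_count=1)

def on_message(ch, method, properties, body):
    # ... process the message ...
    ch.basic_ack(delivery_tag=method.delivery_tag)

# auto_ack must stay False, otherwise the prefetch limit has no effect.
channel.basic_consume(queue="task_queue", on_message_callback=on_message, auto_ack=False)
channel.start_consuming()
```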
Channel Prefetch Count vs Consumer Prefetch Count
The prefetch count can be set per channel or per consumer:
- Channel Prefetch Count
- Consumer Prefetch Count
Which of the two you get is controlled by the ‘global’ flag:

| global flag | Effect |
| --- | --- |
| global=false | The limit is applied to each individual consumer separately |
| global=true | The limit is shared between all consumers on the channel |

Most client libraries default the global flag to false.
Applying the limit across the whole channel (global=true) can be slow, especially on clusters, because the broker and the queues have to coordinate to stay under the shared limit. It often makes sense to avoid it and keep the per-consumer limit by leaving global set to false.
If global is set to false then the limit will be applied to each individual consumer on the channel. If global is set to true then the limit will be shared between all consumers on that channel.
You can set the consumer prefetch count like this (in pika the AMQP global flag is exposed as global_qos, since global is a reserved word in Python):
channel.basic_qos(prefetch_count=5, global_qos=False)
You can set the channel prefetch count like this:
channel.basic_qos(prefetch_count=5, global_qos=True)
You can set both limits on the same channel by issuing both calls:
channel.basic_qos(prefetch_count=5, global_qos=False)
channel.basic_qos(prefetch_count=10, global_qos=True)
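As a rough sketch, both limits could be combined on one channel like this (again assuming pika, a local broker, and made-up queue names):

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders")
channel.queue_declare(queue="invoices")

# Each consumer on this channel may hold up to 5 unacked messages...
channel.basic_qos(prefetch_count=5, global_qos=False)
# ...but all consumers on the channel together share a pool of 10.
channel.basic_qos(prefetch_count=10, global_qos=True)

def handle(ch, method, properties, body):
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="orders", on_message_callback=handle, auto_ack=False)
channel.basic_consume(queue="invoices", on_message_callback=handle, auto_ack=False)
channel.start_consuming()
```

With both limits in place, new messages are only delivered while neither limit has been reached.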
Setting the Prefetch Count
- Larger values
  - Improve the message delivery rate (throughput).
  - Reduce how often the consumer and broker need to communicate.
- Smaller values
  - Can reduce throughput.
  - May be a better fit for larger systems with many consumers.
  - Keep the distribution of messages across consumers more even.
  - Help prevent a single consumer from being overworked.
  - A value of one sends a worker a single message at a time (see the worker sketch after the recommendations below).
Some recommendations:
| Scenario | Recommendation |
| --- | --- |
| Few consumers with fast processing time | Use a larger value. |
| Many consumers with fast processing time | Use a smaller value. |
| Long processing time | Set the value to one. |
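For the long-processing-time case, a common pattern is the prefetch_count=1 worker: each worker only receives its next message after acknowledging the current one, so slow tasks do not pile up on a single consumer. Here is a minimal sketch (assuming pika, a made-up queue name, and a sleep standing in for the real work):

```python
import time
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="task_queue", durable=True)

# One message at a time: the next delivery only arrives after the ack.
channel.basic_qos(prefetch_count=1)

def work(ch, method, properties, body):
    time.sleep(body.count(b"."))  # simulate a long-running task
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="task_queue", on_message_callback=work, auto_ack=False)
channel.start_consuming()
```

Running several copies of this script spreads the work evenly: whichever worker is free picks up the next message, instead of messages being pre-assigned to a busy one.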