Consumers disconnect after long idle from RabbitMQ queue #12356
-
Community Support Policy
RabbitMQ version used: 3.12.x or older
Erlang version used: 26.2.x
Operating system (distribution) used: Ubuntu 22.04.2 LTS (GNU/Linux 5.19.0-1025-aws x86_64)
How is RabbitMQ deployed?: Community Docker image
rabbitmq-diagnostics status output: Running the RabbitMQ service via the Bitnami Docker image, so I can't use the CLI tool to check diagnostics status.
Logs from node 1 (with sensitive values edited out):
Logs from node 2 (if applicable, with sensitive values edited out): No response
Logs from node 3 (if applicable, with sensitive values edited out): See https://www.rabbitmq.com/docs/logging to learn how to collect logs
rabbitmq.conf: Didn't use a rabbitmq.conf file.
Steps to deploy RabbitMQ cluster: Use the given docker-compose file to deploy the service.
Steps to reproduce the behavior in question: Create a queue and a consumer for the queue. If we restart the consumer, RabbitMQ will start working again and the UI will show the correct number of consumers again.
advanced.config:
Application code:

```python
def process_msg_wrapper(self, channel, method, properties, message):
    try:
        json_message = json.loads(message)
    except json.JSONDecodeError:
        return logger.error("Can't decode message via JSONDecode")
    finally:
        logger.info(f"Received message: {message}")
    try:
        self.process_msg(channel, method, properties, json_message)
    except Exception as e:
        logger.error(e)
        return channel.basic_publish(
            exchange="dead.letter",
            routing_key=self.rabbit_mq_queue,
            body=message,
            properties=pika.BasicProperties(
                delivery_mode=2,
                headers={"x-delay": datetime.timedelta(hours=2).seconds * 1000},
            ),
        )
    finally:
        channel.basic_ack(delivery_tag=method.delivery_tag)

def run(self):
    credentials = pika.PlainCredentials(self.rabbit_mq_username, self.rabbit_mq_password)
    conn_params = pika.ConnectionParameters(
        self.rabbit_mq_host,
        credentials=credentials,
        client_properties={
            "connection_name": config.APP_NAME,
        },
        heartbeat=0,
    )
    connection = pika.BlockingConnection(conn_params)
    channel = connection.channel()
    channel.basic_qos(prefetch_count=1, global_qos=False)
    channel.basic_consume(self.rabbit_mq_queue, self.process_msg_wrapper, auto_ack=False)
    channel.start_consuming()
```

Kubernetes deployment file:

```yaml
# Relevant parts of K8S deployment that demonstrate how RabbitMQ is deployed
```
Replies: 3 comments 1 reply
-
https://github.com/rabbitmq/rabbitmq-server/blob/main/COMMUNITY_SUPPORT.md RabbitMQ 3.12 does not receive community support.
Yes you can, by accessing the running image. You have removed the helpful information from the RabbitMQ log files by editing them. You removed the REASON for this connection being closed after 10 days, for instance:
Pika also allows for DEBUG logging: https://github.com/pika/pika/blob/main/examples/basic_consumer_threaded.py#L11-L15
-
@urvisism RabbitMQ 3.12 and older versions are out of community support. Nothing in the provided logs serves as evidence of connection closures or consumer delivery timeouts (channel exceptions).
Over a period of 5-7 days, just about anything can happen to a connection. Heartbeats and consumer delivery timeouts are the two most relevant RabbitMQ "features" that can lead to a connection or channel closure. Connection recovery is something such clients absolutely have to enable or implement, and how that is done depends entirely on the client. Pika does not provide an automatic connection recovery feature, but it does provide guidance and examples.
-
@lukebakken @michaelklishin
There is a dedicated docs section that describes this specific message, among other things: https://www.rabbitmq.com/docs/logging#connection-lifecycle-events
Your clients' TCP connections are closed. They need to handle such events and recover.