Now and then we experience that some of our endpoints (not one specific endpoint) stop processing messages for no obvious reason (no errors). Today it happened again, after a few weeks of not occurring. When checking the issue, we saw the following:
- The endpoint in Kubernetes still has the status “Running”
- RabbitMQ shows there are NO messages UNACKED, so all messages are still on the queue.
- The last log lines (before it stopped processing) state that it picked up messages, so we would expect there to be UNACKED messages. We log this via a pipeline behavior (LogIncomingMessageBehavior).
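For reference, the behavior is a standard NServiceBus pipeline behavior along these lines (a simplified sketch, not our exact code; the log call and registration details are illustrative):

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus.Pipeline;

public class LogIncomingMessageBehavior : Behavior<IIncomingPhysicalMessageContext>
{
    public override async Task Invoke(IIncomingPhysicalMessageContext context, Func<Task> next)
    {
        // This runs before the handler, i.e. while the message should still be UNACKED.
        Console.WriteLine($"Picked up message {context.MessageId}");

        await next().ConfigureAwait(false);
    }
}
```

It is registered via `endpointConfiguration.Pipeline.Register(...)` and simply logs before calling the rest of the pipeline, which is why we expected the logged messages to show up as UNACKED in RabbitMQ.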
After restarting the pod, it just grabs all the messages from the queue and successfully processes them.
I have checked the following topics, which look similar, but they didn't provide a solution for us.
Our endpoints run in Kubernetes (Linux, .NET Core 3.1).
We are hoping someone recognizes this issue, or can give us more ideas on where to look for the cause.