In our current implementation we assumed that sending a message to the queue either succeeds or fails because of connectivity issues, so the web request that sends the message completes with either success or failure.
For actual connectivity problems, where the connection to the broker is lost, this holds: the connection attempts to reconnect automatically, and any message sent during that time results in an exception being thrown.
So the question is: what is the recommended approach if we want to guarantee that either the message is registered in the system or the original operation (the web request) fails?
The only time you’ll see the behavior you’re describing is when the broker has gone into a memory or disk alarm. It then pauses publishing connections by blocking them until the conditions causing the alarms are resolved.
As long as you’ve allocated enough resources for your broker, you should not generally see blocked connections. You should definitely consider setting up monitoring for your broker so you can detect any alarm conditions that occur.
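As a sketch of what such monitoring could look at: the RabbitMQ management HTTP API reports per-node `mem_alarm` and `disk_free_alarm` flags on `/api/nodes`. The snippet below just parses a hard-coded sample response of that shape; in a real monitor you would fetch the JSON over HTTP from the management plugin (by default on port 15672) and feed any flagged node names to your alerting system.

```python
import json

# Sample response in the shape returned by GET /api/nodes on the RabbitMQ
# management API, trimmed to the fields we care about. A real monitor would
# fetch this over HTTP with management credentials instead of hard-coding it.
SAMPLE_NODES = json.dumps([
    {"name": "rabbit@host1", "mem_alarm": False, "disk_free_alarm": True},
    {"name": "rabbit@host2", "mem_alarm": False, "disk_free_alarm": False},
])

def nodes_in_alarm(nodes_json: str) -> list:
    """Return the names of nodes with an active memory or disk alarm."""
    alarms = []
    for node in json.loads(nodes_json):
        if node.get("mem_alarm") or node.get("disk_free_alarm"):
            alarms.append(node["name"])
    return alarms

print(nodes_in_alarm(SAMPLE_NODES))  # → ['rabbit@host1']
```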
Based on that, I’m not sure there’s anything else at the application level that needs to change.
For the blocked/unblocked events: the initial idea was to treat the messaging system as inaccessible and fail the original web request when the connection is already known, via those events, to be blocked.
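A minimal sketch of that idea, assuming the client library exposes blocked/unblocked callbacks (pika, for instance, has `add_on_connection_blocked_callback`). The `Publisher` wrapper, `BrokerBlocked` exception, and the injected `send` function are all hypothetical names for illustration:

```python
import threading

class BrokerBlocked(Exception):
    """Raised instead of letting the publish hang on a blocked connection."""

class Publisher:
    def __init__(self, send):
        self._send = send              # the real publish function
        self._blocked = threading.Event()

    # Wire these two into the client library's blocked/unblocked callbacks.
    def on_blocked(self, reason=None):
        self._blocked.set()

    def on_unblocked(self):
        self._blocked.clear()

    def publish(self, message):
        # Fail the web request fast if the broker is known to be blocked.
        if self._blocked.is_set():
            raise BrokerBlocked("broker connection is blocked")
        self._send(message)

sent = []
p = Publisher(sent.append)
p.publish("order-1")            # succeeds
p.on_blocked("memory alarm")
try:
    p.publish("order-2")        # fails fast instead of hanging
except BrokerBlocked:
    pass
print(sent)  # → ['order-1']
```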
I’m not sure that this would be reliable enough to be worth doing. Since the connection could be blocked at any time, it would be possible for your check to pass and start sending a message, but then the connection is blocked before the message is sent, and you’d still end up waiting for the connection to be unblocked.
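To make the race concrete: in the toy run below, the blocking event is forced to fire in the window between the flag check and the actual send, so the check passes and the message still goes out on a connection that is now blocked. All names here are illustrative stand-ins, not a real client API.

```python
import threading

blocked = threading.Event()

def check_then_publish(send, message):
    """The proposed pattern: check the blocked flag, then publish."""
    if blocked.is_set():
        return "failed fast"
    return send(message)

def racy_send(message):
    # Simulate the broker going into alarm *after* the check passed but
    # *before* the message hits the wire: a real send would now block
    # until the connection is unblocked.
    blocked.set()
    return "send started on a connection that is now blocked"

result = check_then_publish(racy_send, "order-42")
print(result)            # the check passed, yet the send is stuck behind the block
print(blocked.is_set())  # → True
```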