Still encountering intermittent MessageLockLost exceptions on 1.2.0


We updated to NServiceBus.Transport.AzureServiceBus 1.2.0 a while ago, since that release contained some fixes for the MessageLockLostException.

However, we are still seeing this behavior intermittently, and honestly I'm having a hard time wrapping my head around what is causing it or why. I'm not even sure which messages are triggering these errors.

If someone could point me in the right direction or shed some light on this, that would be most helpful.


Hi Jarrich,

This could be caused by a number of things, but the underlying reason is that the message's lock has expired. One common cause is a handler taking longer to process the message than the lock duration, which is where I would start investigating since it's happening intermittently.

If that’s not it, we’ll need to dive into your handler and endpoint config to diagnose further. If you’re comfortable doing that here, then go ahead and post it. Otherwise, I’d suggest contacting our support for further troubleshooting.

– Kyle


Thank you for getting back to me. I am doing SQL work within those handlers, so I might put explicit timeout limits on those commands to prevent them from taking so long that the lock expires.
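For what it's worth, something like this is what I have in mind — a minimal sketch using ADO.NET's `CommandTimeout`; the connection string, query, and timeout value are placeholders I'd tune to sit comfortably below the queue's lock duration:

```csharp
// Sketch only: connectionString and the query are placeholders.
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("UPDATE Orders SET Status = @status WHERE Id = @id", connection))
{
    // Fail fast instead of letting a slow query outlive the message lock
    // (Azure Service Bus lock durations are commonly in the 30-60 second range).
    command.CommandTimeout = 20; // seconds

    await connection.OpenAsync();
    await command.ExecuteNonQueryAsync();
}
```

That way a slow query throws inside the handler (and goes through normal retries) rather than silently running past the lock expiry.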

I will report back with my findings.


Hi Jarrich

Another thing to consider is the prefetch count.

By default the multiplier is 10, which means the transport fetches ten times as many messages as the endpoint's concurrency limit. Once messages are prefetched, the lock clock starts ticking for them. So if message handling is relatively slow on that endpoint, you might want to see whether a smaller prefetch multiplier or a fixed prefetch count makes sense.
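As a rough sketch of what tuning that looks like in the endpoint configuration (names based on the Azure Service Bus transport's configuration API; adjust to your setup):

```csharp
// Sketch only: endpointConfiguration is your existing EndpointConfiguration.
var transport = endpointConfiguration.UseTransport<AzureServiceBusTransport>();

// Lower the multiplier so fewer messages sit locked in the local buffer
// (prefetched = multiplier * endpoint concurrency).
transport.PrefetchMultiplier(2);

// ...or pin an absolute prefetch count instead, which takes precedence
// over the multiplier:
// transport.PrefetchCount(10);
```

With slow handlers, a smaller buffer means fewer messages waiting locally while their locks count down.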


Hi guys,

Thank you very much, all of this is incredibly useful. I will take a look at the PrefetchCount because I’m sure I am still using the default values there.

I’ll report back if my problem persists.