Limit parallel processing of messages

We have an NServiceBus solution hosted in an Azure Function App that updates a subsystem via an API. The database behind the subsystem has problems handling near-simultaneous requests.
As a temporary solution we have tried setting the maximum number of instances of the Azure Function App to 1 and setting the NServiceBus configuration setting LimitMessageProcessingConcurrencyTo to 1.

We still see that processing of a new message starts before processing of an existing message has completed.
Any help would be appreciated.

LimitMessageProcessingConcurrencyTo will not help in this case, as the Functions implementation relies on the native Azure Functions mechanism to retrieve messages. You will need to configure your Function App to reduce concurrency; how you do so depends on the Functions SDK you're using.
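For the Service Bus trigger this is typically done in host.json. A minimal sketch, assuming the Service Bus extension v5.x (older extension versions nest the value under a messageHandlerOptions section instead):

```json
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "maxConcurrentCalls": 1
    }
  }
}
```

To also keep the app from fanning out to multiple instances on a Consumption plan, the WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT application setting can be set to 1, though the platform documents this as a best-effort limit rather than a guarantee.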


Thanks, that solved our problem.
It might be good to mention this in your documentation.

We have the following documented:

Concurrency-related settings are controlled via the Azure Function host.json configuration file. See Concurrency in Azure Functions for details.

Source: Azure Functions with Azure Service Bus • NServiceBus.AzureFunctions.InProcess.ServiceBus • Particular Docs

I suspect the issue here is a bit of a misalignment in the API. LimitMessageProcessingConcurrencyTo is available via EndpointConfiguration and doesn't really consider the host (Azure Functions); it simply passes the value to whatever transport the user selects. In the case of Functions, there's no active transport, so the setting is dismissed. What could potentially help is logging a warning about user-configured concurrency that is not taken into consideration. Raised an improvement issue here.
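To illustrate the misalignment, a rough sketch of the call path, assuming the NServiceBus.AzureFunctions.InProcess.ServiceBus package ("MyEndpoint" is a placeholder):

```csharp
// In a Function-hosted endpoint, the regular EndpointConfiguration is
// reached via AdvancedConfiguration.
var configuration = new ServiceBusTriggeredEndpointConfiguration("MyEndpoint");

// This compiles and runs, but is silently ignored: concurrency is driven
// by the Functions host, not by a transport message pump, so the value
// is never applied.
configuration.AdvancedConfiguration.LimitMessageProcessingConcurrencyTo(1);
```

This is why the setting appears to "not work" rather than failing loudly, and why a logged warning would help.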

That is a good improvement that could reduce confusion.

While @SeanFeldman already answered the question, I just wanted to share some additional thoughts on this topic. As Sean also stated in his last comment:

I suspect the issue here is a bit of a misalignment in the API.

This is absolutely true for the serverless APIs, where part of the logic/features usually handled by the transport's message pump is now managed by the serverless model. We've recently analyzed this mismatch and concluded that about 15% of the available configuration APIs aren't really compatible or valuable in serverless environments (this includes LimitMessageProcessingConcurrencyTo). Unfortunately, this isn't trivial to solve and there are pros and cons to all the options, but we are definitely aware of the issue.