Migrate NServiceBus 5.2 to 7.2 | Scale out | AzureServiceBus Vs MSMQ

We are migrating from NServiceBus 5.2 to 7.2. Current implementation with v5:

- Transport: MSMQ
- Persistence: NHibernate
- 13 logical endpoints deployed on Azure VMs

Recently we faced a challenge in production: a queue was overloaded by an influx of incoming messages. We are looking to improve concurrency by setting each endpoint's MaximumConcurrencyLevel based on the machine (Azure VM core count * 2). Will this core-count-based concurrency value help, considering the current endpoint count (13)? I'm asking this upfront because we are not currently using the NServiceBus monitoring tools (ServicePulse/ServiceControl), and monitoring concurrency is quite involved…
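For reference, in v5 the concurrency ceiling is typically set via the TransportConfig section in app.config. A minimal sketch, assuming a 4-core VM (so 4 * 2 = 8; the value is an example, not a recommendation):

```xml
<configuration>
  <configSections>
    <section name="TransportConfig"
             type="NServiceBus.Config.TransportConfig, NServiceBus.Core" />
  </configSections>
  <!-- Hypothetical value: 4 cores * 2 -->
  <TransportConfig MaximumConcurrencyLevel="8" />
</configuration>
```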

What would be the right recommended migration path (v5 to v7) considering the current scale-out scenario? I see the Azure Service Bus transport could provide better scale-out capability and monitoring.
Migrate to v7 with Azure Service Bus, vs. migrate to v7 with the current MSMQ transport and sender-side distribution?

Hi Anshu

Are you currently using the distributor with v5 and MSMQ?

Has the challenge you are describing occurred on a newer version of NServiceBus?


Yes, we are currently using the distributor, and our current NServiceBus version is 5.2.20.

Hi Anshu

I would take small steps at a time and not yet move to Azure Service Bus. You want to be in control of the risk vectors of the upgrade, so sticking to the transport you are already familiar with makes the most sense to me as a first step.

So I’d say upgrade all endpoints to v6 first, using the provided upgrade and migration path.


Once you have gone through that migration you will have MSMQ running on v6 using sender-side distribution. One of the benefits of v6 with MSMQ is that the MSMQ transport is quite a bit faster out of the box. Once you are there, you can upgrade to the latest v7 version.
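As a sketch of what sender-side distribution looks like in v6: the scaled-out instances are declared in an instance-mapping.xml file next to the sending endpoint (the endpoint and machine names below are placeholders):

```xml
<endpoints>
  <!-- Hypothetical endpoint scaled out across two Azure VMs;
       the sender round-robins messages between the instances. -->
  <endpoint name="Sales.OrderProcessing">
    <instance machine="AZVM-01" />
    <instance machine="AZVM-02" />
  </endpoint>
</endpoints>
```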

After that you might even be able to decommission a few of your machines depending on what your load tests show you.

I would then treat the question of whether to migrate the transport from MSMQ to Azure Service Bus as a separate decision that can be dealt with once you are running stably and smoothly on v7 with MSMQ and sender-side distribution.



Thanks Daniel! Regarding the migration, we discussed this before with @ramonsmits. I think we are on the same page.

The currently overflowing message queue is quite painful now. Could you please shed some light on this: will the approach mentioned in the first post of this thread, i.e. setting all 13 endpoints to MaxConcurrencyLevel = core count * 2, help us gain better throughput? Or might we encounter some unforeseen troubles later?

Hi Anshu

It can definitely help to increase the throughput.

Or might we encounter some unforeseen troubles later?

That is tough to say from the outside; it depends on what the endpoints are doing. Let me give a simple example. Say a handler opens DB connections and you have set the DB connection pool size to 20. In that case, you would want to cap concurrency at 20 to avoid running into connection problems with your database. So, in essence, I would set the concurrency in alignment with the “weakest” IO element that you call from within the handler. Then things should be fine.
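In v6/v7, the cap described above can be set in code when configuring the endpoint. A minimal sketch, assuming a DB connection pool size of 20 (the endpoint name is hypothetical):

```csharp
var endpointConfiguration = new EndpointConfiguration("Sales.OrderProcessing");
endpointConfiguration.UseTransport<MsmqTransport>();

// Align concurrency with the weakest IO dependency the handlers call:
// here, a DB connection pool of 20.
endpointConfiguration.LimitMessageProcessingConcurrencyTo(20);
```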

Does that help to give you better heuristics?



I’ll restate earlier recommendations for reference.

  1. Implement server monitoring and monitor CPU, RAM, disk and especially network IO.
  2. Upgrade to version 6 first, as this has massive performance improvements, and validate whether you even need the distributor.
  3. If the system is slow and server monitoring is not indicating bottlenecks, increase maximum concurrency in small steps.
  4. If the bottleneck is the machine hosting the endpoints (RAM, CPU, Disk IO), then move endpoints to their own machine.
  5. If you reach a point where all your machines host only a single endpoint, then you need to consider sender-side distribution if you cannot scale up that specific machine.