ASB: The number of requests on the Bundle-1 topic multiplied by 10 with the RoundRobin strategy?

Hi,

Since I switched to the RoundRobin strategy three weeks ago (I also updated the NServiceBus.Azure.Translation.WindowsAzureServiceBus package from 7.2.9 to 7.2.11 in order to benefit from the RoundRobin fix introduced in 7.2.10) and started using two ASB namespaces instead of one, the number of requests on the bundle-1 topic has been multiplied by 10, while the number of incoming and outgoing messages has been divided by 2 (split between the two ASB namespaces).
Here are the topic metrics for the bundle-1 topics in ASB1 and ASB2, where you can see the difference after applying the RoundRobin strategy:


How can this dramatic increase in requests be explained? Is it only a problem related to the uncached RoundRobin strategy?
This number of requests appears to impact the performance of the ASB namespaces. For example, the transfer message count of the queue (its transfer sub-queue) increases during traffic peaks. Is there a way to overcome this problem?

Conversely, as shown in the images below, the number of requests on the queues seems to have been divided by 10 (so would the possible caching problem only concern the topics?), and the messages are evenly distributed across the queues.
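For reference, the relevant part of the change is the switch from a single namespace to round-robin partitioning across two. The setting names match my actual configuration, posted in full further down; the "before" line is only an approximation of the previous setup:

// Before (approximately): a single namespace
// transport.ConnectionString(CloudConfigurationManager.GetSetting("NServiceBusTransportSB1ConnectionString"));

// After: round-robin partitioning across two namespaces
var partitioning = transport.NamespacePartitioning();
partitioning.UseStrategy<RoundRobinNamespacePartitioning>();
partitioning.AddNamespace("namespace1", CloudConfigurationManager.GetSetting("NServiceBusTransportSB1ConnectionString"));
partitioning.AddNamespace("namespace2", CloudConfigurationManager.GetSetting("NServiceBusTransportSB2ConnectionString"));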

Thanks

Hi Vincent,

Could you please share your endpoint configuration?

Hi Sean,

You will find the full endpoint configuration below:

// Serialization
endpointConfiguration.UseSerialization<NewtonsoftSerializer>();

// Saga persistence (Azure Storage)
var persistenceSaga = endpointConfiguration.UsePersistence<AzureStoragePersistence, StorageType.Sagas>();
persistenceSaga.ConnectionString(CloudConfigurationManager.GetSetting("NServiceBusPersistenceConnectionString"));
persistenceSaga.CreateSchema(true);
persistenceSaga.AssumeSecondaryIndicesExist();

// Subscription persistence (Azure Storage)
var persistenceSubs = endpointConfiguration.UsePersistence<AzureStoragePersistence, StorageType.Subscriptions>();
persistenceSubs.ConnectionString(CloudConfigurationManager.GetSetting("NServiceBusPersistenceConnectionString"));
persistenceSubs.TableName("MySubscriptions");
persistenceSubs.CreateSchema(true);
persistenceSubs.CacheFor(TimeSpan.FromMinutes(1));

// Timeout persistence (Azure Storage)
var persistenceTimeouts = endpointConfiguration.UsePersistence<AzureStoragePersistence, StorageType.Timeouts>();
persistenceTimeouts.ConnectionString(CloudConfigurationManager.GetSetting("NServiceBusPersistenceConnectionString"));
persistenceTimeouts.CreateSchema(true);
persistenceTimeouts.TimeoutManagerDataTableName("TimeoutManager");
persistenceTimeouts.TimeoutDataTableName("TimeoutData");
persistenceTimeouts.CatchUpInterval(3600);
persistenceTimeouts.PartitionKeyScope("yyyy-MM-dd-HH");

// Azure Service Bus transport with round-robin partitioning across two namespaces
var transport = endpointConfiguration.UseTransport<AzureServiceBusTransport>();
transport.BrokeredMessageBodyType(SupportedBrokeredMessageBodyTypes.Stream);

var partitioning = transport.NamespacePartitioning();
partitioning.UseStrategy<RoundRobinNamespacePartitioning>();
partitioning.AddNamespace(
    name: "namespace1",
    connectionString: CloudConfigurationManager.GetSetting("NServiceBusTransportSB1ConnectionString"));
partitioning.AddNamespace(
    name: "namespace2",
    connectionString: CloudConfigurationManager.GetSetting("NServiceBusTransportSB2ConnectionString"));

endpointConfiguration.LimitMessageProcessingConcurrencyTo(8);
transport.Queues().LockDuration(TimeSpan.FromMinutes(5));
transport.Queues().MaxDeliveryCount(5);

var sanitization = transport.Sanitization();
sanitization.UseStrategy<MyCustomSanitization>();

transport.UseForwardingTopology();

// Recoverability: 2 immediate retries, then 2 delayed retries with a 15-second time increase
var recoverability = endpointConfiguration.Recoverability();
recoverability.Immediate(
    customizations: immediate =>
    {
        immediate.NumberOfRetries(2);
    });
recoverability.Delayed(
    customizations: delayed =>
    {
        var numberOfRetries = delayed.NumberOfRetries(2);
        numberOfRetries.TimeIncrease(TimeSpan.FromSeconds(15));
    });

endpointConfiguration.SendFailedMessagesTo("error");
endpointConfiguration.DisableFeature<Audit>();

Thanks

At first sight it looks like the requests have moved from 'send' to 'publish'. Did you by any chance accidentally change some of your logic to publish instead of send?
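To make the distinction concrete: with the forwarding topology, Send() delivers a command straight to the destination queue, whereas Publish() routes the event through the bundle-1 topic. A minimal sketch, with made-up message types purely for illustration:

using System.Threading.Tasks;
using NServiceBus;

// Hypothetical message types, only for illustration.
public class OrderPlaced : IEvent { public string OrderId { get; set; } }
public class OrderBilled : IEvent { public string OrderId { get; set; } }
public class BillCustomer : ICommand { public string OrderId { get; set; } }

public class OrderPlacedHandler : IHandleMessages<OrderPlaced>
{
    public async Task Handle(OrderPlaced message, IMessageHandlerContext context)
    {
        // Send(): the command goes directly to the destination queue,
        // so it shows up as requests on the queue.
        await context.Send(new BillCustomer { OrderId = message.OrderId });

        // Publish(): the event goes through the bundle-1 topic (forwarding topology),
        // so it shows up as requests on the topic.
        await context.Publish(new OrderBilled { OrderId = message.OrderId });
    }
}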