With NSB 8 I'm getting ServiceBusException: Cannot allocate more handles. The maximum number of handles is 255

Thanks for the detailed information. It allowed us to progress with the investigation. So far, however, we have failed to reproduce it. Could you grant me read access to the code for the scheduled trigger? My GitHub handle is szymonpobiega.

Szymon

I have given you read access to the GitHub repo within AutomateValue (github.com). The /src/AV.Spotler.FunctionHost is the Azure function, but the same code can also be run as AV.Spotler.WorkerHost.

Most of the configuration of the endpoint is done by ConfigureEndpointDefaults(), which is located in AV.Endpoint.

Thanks! I’ve looked at the code and tried to create a reproduction. So far I have not been able to reproduce the problem on my side, but we are working on it continuously.

I investigated further and it seems to be related to a ‘high’ number of updated contacts. Normally this is below 100, but I see strange things happening with one customer that has 670 contacts.

The logic is:

  1. Endpoint1: A scheduled Azure Function sends the msg CheckForUpdatedContacts every 15 min.
  2. Endpoint1: FindOrCheckForUpdatedContactsHandler processes the msg
    a. calls a webservice to find updated contacts
    b. publishes ContactUpdated for each contact
  3. Endpoint1: Saga SyncContactWorkflow is started by ContactUpdated
    a. Sends FindCustomerOrLeadRequest to Endpoint2
    b. Receives FindCustomerOrLeadResponse from Endpoint2
    c. Sends UpdateCustomerOrLeadRequest to Endpoint2
    d. Receives UpdateCustomerOrLeadResponse from Endpoint2
    e. Completes and publishes SyncContactCompleted
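
For readers following along, the saga in step 3 can be sketched roughly like this. The message types, the ContactId correlation property, and the NServiceBus package reference are assumptions based on the steps above, not the actual AutomateValue code:

```csharp
// Rough sketch of the SyncContactWorkflow saga described in step 3.
// Message shapes and the ContactId correlation property are assumptions.
public class SyncContactWorkflow :
    Saga<SyncContactWorkflow.SyncContactData>,
    IAmStartedByMessages<ContactUpdated>,
    IHandleMessages<FindCustomerOrLeadResponse>,
    IHandleMessages<UpdateCustomerOrLeadResponse>
{
    public class SyncContactData : ContainSagaData
    {
        public string ContactId { get; set; }
    }

    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<SyncContactData> mapper) =>
        mapper.MapSaga(saga => saga.ContactId)
            .ToMessage<ContactUpdated>(m => m.ContactId)
            .ToMessage<FindCustomerOrLeadResponse>(m => m.ContactId)
            .ToMessage<UpdateCustomerOrLeadResponse>(m => m.ContactId);

    // 3a: routing configuration directs this to Endpoint2
    public Task Handle(ContactUpdated message, IMessageHandlerContext context) =>
        context.Send(new FindCustomerOrLeadRequest { ContactId = message.ContactId });

    // 3b + 3c
    public Task Handle(FindCustomerOrLeadResponse message, IMessageHandlerContext context) =>
        context.Send(new UpdateCustomerOrLeadRequest { ContactId = message.ContactId });

    // 3d + 3e
    public async Task Handle(UpdateCustomerOrLeadResponse message, IMessageHandlerContext context)
    {
        await context.Publish(new SyncContactCompleted { ContactId = message.ContactId });
        MarkAsComplete();
    }
}
```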

Big Dependency tree
I see a big dependency tree being built up in Application Insights because of the publish in step 2b; each published event generates a lot of traces from step 3:

Each publish event in step 2 should start a new conversation, but I now have this:

        var options = new PublishOptions();
        // options.StartNewConversation(); // doesn't work with publish
        options.SetHeader(Headers.CorrelationId, Guid.NewGuid().ToString());
        options.RequireImmediateDispatch();

I should have added options.SetHeader(Headers.ConversationId, Guid.NewGuid().ToString()); but looking at the tracing in Application Insights I don’t see the CorrelationId changing for each publish event. They are all the same. How do I start a new conversation for published events?
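
For sends, NServiceBus 8 does offer StartNewConversation() on SendOptions (the error message quoted later in this thread points at it); PublishOptions has no equivalent, which is exactly the gap here. A minimal sketch of the send-side API, with SomeCommand as a placeholder type:

```csharp
// Starting a new conversation is supported for sends in NServiceBus 8.
// SomeCommand is a placeholder; there is no StartNewConversation() on
// PublishOptions, which is what the question above is about.
var sendOptions = new SendOptions();
sendOptions.StartNewConversation();
await context.Send(new SomeCommand(), sendOptions);
```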

ServiceBusException
Then I see the duration to publish each event of step 2b building up from 300-400ms:
(screenshot omitted)

To 4-7sec:
(screenshot omitted)

Then after x publish events, I see the following error from ServiceBus being generated:
(screenshot omitted)

Azure.Messaging.ServiceBus.ServiceBusException: The operation was canceled. (ServiceCommunicationProblem). For troubleshooting information, see https://aka.ms/azsdk/net/servicebus/exceptions/troubleshoot.
 ---> System.OperationCanceledException: The operation was canceled.
   at Microsoft.Azure.Amqp.AsyncResult.End[TAsyncResult](IAsyncResult result)
   at Microsoft.Azure.Amqp.SendingAmqpLink.<>c.<SendMessageAsync>b__12_1(IAsyncResult r)
   at System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean requiresSynchronization)
--- End of stack trace from previous location ---
   at Azure.Messaging.ServiceBus.Amqp.AmqpSender.SendBatchInternalAsync(AmqpMessage batchMessage, TimeSpan timeout, CancellationToken cancellationToken)
   --- End of inner exception stack trace ---
   at Azure.Messaging.ServiceBus.Amqp.AmqpSender.SendBatchInternalAsync(AmqpMessage batchMessage, TimeSpan timeout, CancellationToken cancellationToken)
   at Azure.Messaging.ServiceBus.Amqp.AmqpSender.SendBatchInternalAsync(IReadOnlyCollection`1 messages, TimeSpan timeout, CancellationToken cancellationToken)
   at Azure.Messaging.ServiceBus.Amqp.AmqpSender.<>c.<<SendBatchAsync>b__21_0>d.MoveNext()
--- End of stack trace from previous location ---
   at Azure.Messaging.ServiceBus.ServiceBusRetryPolicy.<>c__22`1.<<RunOperation>b__22_0>d.MoveNext()
--- End of stack trace from previous location ---
   at Azure.Messaging.ServiceBus.ServiceBusRetryPolicy.RunOperation[T1,TResult](Func`4 operation, T1 t1, TransportConnectionScope scope, CancellationToken cancellationToken, Boolean logRetriesAsVerbose)
   at Azure.Messaging.ServiceBus.ServiceBusRetryPolicy.RunOperation[T1,TResult](Func`4 operation, T1 t1, TransportConnectionScope scope, CancellationToken cancellationToken, Boolean logRetriesAsVerbose)
   at Azure.Messaging.ServiceBus.ServiceBusRetryPolicy.RunOperation[T1](Func`4 operation, T1 t1, TransportConnectionScope scope, CancellationToken cancellationToken)
   at Azure.Messaging.ServiceBus.Amqp.AmqpSender.SendBatchAsync(ServiceBusMessageBatch messageBatch, CancellationToken cancellationToken)
   at Azure.Messaging.ServiceBus.ServiceBusSender.SendMessagesAsync(ServiceBusMessageBatch messageBatch, CancellationToken cancellationToken)
   at NServiceBus.Transport.AzureServiceBus.MessageDispatcher.DispatchBatchForDestination(String destination, ServiceBusClient client, Transaction transaction, Queue`1 messagesToSend, CancellationToken cancellationToken) in /_/src/Transport/Sending/MessageDispatcher.cs:line 189
   at NServiceBus.Transport.AzureServiceBus.MessageDispatcher.Dispatch(TransportOperations outgoingMessages, TransportTransaction transaction, CancellationToken cancellationToken) in /_/src/Transport/Sending/MessageDispatcher.cs:line 92
   at NServiceBus.TransportReceiveToPhysicalMessageConnector.Invoke(ITransportReceiveContext context, Func`2 next) in /_/src/NServiceBus.Core/Pipeline/Incoming/TransportReceiveToPhysicalMessageConnector.cs:line 68
   at NServiceBus.RetryAcknowledgementBehavior.Invoke(ITransportReceiveContext context, Func`2 next) in /_/src/NServiceBus.Core/ServicePlatform/Retries/RetryAcknowledgementBehavior.cs:line 46
   at NServiceBus.TracingExtensions.<>c__DisplayClass0_0`1.<<Invoke>g__TracePipelineStatus|0>d.MoveNext() in /_/src/NServiceBus.Core/OpenTelemetry/Tracing/TracingExtensions.cs:line 28

Then at the end of the telemetry I see this exception:

Azure.Messaging.ServiceBus.ServiceBusException:
   at Azure.Messaging.ServiceBus.Amqp.AmqpReceiver+<RenewMessageLockInternalAsync>d__56.MoveNext (Azure.Messaging.ServiceBus, Version=7.8.0.0, Culture=neutral, PublicKeyToken=92742159e12e44c8)
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at Azure.Messaging.ServiceBus.Amqp.AmqpReceiver+<>c+<<RenewMessageLockAsync>b__55_0>d.MoveNext (Azure.Messaging.ServiceBus, Version=7.8.0.0, Culture=neutral, PublicKeyToken=92742159e12e44c8)
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at Azure.Messaging.ServiceBus.ServiceBusRetryPolicy+<RunOperation>d__21`2.MoveNext (Azure.Messaging.ServiceBus, Version=7.8.0.0, Culture=neutral, PublicKeyToken=92742159e12e44c8)
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at Azure.Messaging.ServiceBus.ServiceBusRetryPolicy+<RunOperation>d__21`2.MoveNext (Azure.Messaging.ServiceBus, Version=7.8.0.0, Culture=neutral, PublicKeyToken=92742159e12e44c8)
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at Azure.Messaging.ServiceBus.Amqp.AmqpReceiver+<RenewMessageLockAsync>d__55.MoveNext (Azure.Messaging.ServiceBus, Version=7.8.0.0, Culture=neutral, PublicKeyToken=92742159e12e44c8)
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at Azure.Messaging.ServiceBus.ServiceBusReceiver+<RenewMessageLockAsync>d__62.MoveNext (Azure.Messaging.ServiceBus, Version=7.8.0.0, Culture=neutral, PublicKeyToken=92742159e12e44c8)
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at Azure.Messaging.ServiceBus.ServiceBusReceiver+<RenewMessageLockAsync>d__61.MoveNext (Azure.Messaging.ServiceBus, Version=7.8.0.0, Culture=neutral, PublicKeyToken=92742159e12e44c8)
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at Azure.Messaging.ServiceBus.ReceiverManager+<RenewMessageLockAsync>d__18.MoveNext (Azure.Messaging.ServiceBus, Version=7.8.0.0, Culture=neutral, PublicKeyToken=92742159e12e44c8)

The customer with 670 updated contacts every x hours is a new customer onboarded around the same time we deployed NSB 8, so I can’t exclude 100% that this wouldn’t happen with NSB 7. But similar numbers also happened before NSB 8 and were not an issue (at least not noticeably).

I have a feeling it is related to the telemetry, but that is just a guess. I haven’t disabled it yet.

Can we improve this process or are there things we do wrong, that we can quickly fix?

Ok, setting the ConversationId directly using the headers is not allowed :slight_smile: . See the error below.

But there isn’t a StartNewConversation() on PublishOptions. Is there a reason for this?

System.Exception: Cannot set the NServiceBus.ConversationId header to '76ac9948-b68a-426e-ba6e-469a1bb5f80d' as it cannot override the incoming header value ('17ddb68f-d00d-4cfa-97df-af6a000844da'). To start a new conversation use sendOptions.StartNewConversation().
   at NServiceBus.AttachCausationHeadersBehavior.SetConversationIdHeader(IOutgoingLogicalMessageContext context, IncomingMessage incomingMessage) in /_/src/NServiceBus.Core/Causation/AttachCausationHeadersBehavior.cs:line 60
   at NServiceBus.AttachCausationHeadersBehavior.Invoke(IOutgoingLogicalMessageContext context, Func`2 next) in /_/src/NServiceBus.Core/Causation/AttachCausationHeadersBehavior.cs:line 23

@SzymonPobiega @danielmarbach I changed a lot in our code to see if I could fix it. We disabled OpenTelemetry, made things simpler (got rid of a saga), and solved some exceptions on our end. I also scaled up our hardware for the moment, so we have higher costs. But the issue still remains.

We noticed that Azure Service Bus connections are building up and that after an hour Azure Service Bus starts to throttle requests. This is very strange because our number of requests is normally very low.

I think I have pinpointed the issue to the scheduled Azure functions that we have. When we disable these functions (while the main generated NServiceBus function is still enabled), we see our environment return to normal behavior.

Our Azure Function memory has not been spiking since we disabled it on Dec 20 around 10 am. See image:

Our ServiceBus requests/messages are normal and no longer show the build-up.

I am still investigating our scheduled Azure functions, mostly CheckForUpdatedContactsFunction & CheckForBouncedContactsFunction, but the code itself looks fine to me.

Hi

Thanks for the updates. So far we have not been able to reproduce it, but we have some follow-up questions that can direct our efforts better:

  • When you disable the timer trigger, does it mean that the messages stop flowing? We are especially interested in whether the CheckForUpdatedContacts handler gets executed when this trigger is disabled
  • Do you have the timer trigger logs stored? We are interested in how the NullReferenceException you mentioned in the other thread looks in the log files. We are also interested in whether there are exceptions that would indicate failed endpoint startup attempts
  • Can you also share (over a support case) the NServiceBus-specific logs?
  • What is the scalability policy for the function app?

Thanks,
Szymon

Also, would it be possible to give read access to the code to my colleagues Mike and Andreas?

When you disable the timer trigger, does it mean that the messages stop flowing? We are especially interested in whether the CheckForUpdatedContacts handler gets executed when this trigger is disabled

Yes, the number of requests/connections goes down and there is no more throttling. The CheckForUpdatedContacts handler is not executed anymore.

If we send a CheckForUpdatedContacts message another way, it seems to work OK.

Keep in mind that both the scheduled Azure Function and the handler itself are running on the same endpoint. I don’t know if the issue could be that they run in the same scope/transaction? My assumption is no.

p.s. We saw some issues with customers doing an initial load of their contacts, where the number of updated contacts would rise above 50k. The CheckForUpdatedContacts handler was sending >50k events and would run past both the 10 min duration limit of the function and the 5 min message lock of Azure Service Bus. We now log warnings above 10k updated contacts and stop processing when 25k contacts is reached so that the handler can finish OK (instead of retrying the message).
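
The safeguards described in the p.s. could be sketched roughly like this. The web service, logger, and message shapes are assumptions, not the real handler code:

```csharp
// Sketch: warn above 10k updated contacts and cap a single handler
// invocation at 25k so it finishes within the function timeout and the
// 5 min message lock. webService, logger, and ContactUpdated are assumed.
const int WarnThreshold = 10_000;
const int HardCap = 25_000;

var contacts = await webService.FindUpdatedContactsAsync();
if (contacts.Count > WarnThreshold)
{
    logger.LogWarning("Found {Count} updated contacts, which is unusually high", contacts.Count);
}

foreach (var contact in contacts.Take(HardCap))
{
    await context.Publish(new ContactUpdated { ContactId = contact.Id });
}
// Remaining contacts are picked up by the next scheduled run instead of
// letting the message lock expire and the whole batch retry.
```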

===
Do you have the timer trigger logs stored? We are interested in how the NullReferenceException you mentioned in the other thread looks in the log files. We are also interested in whether there are exceptions that would indicate failed endpoint startup attempts

I have added them to this post.
function_logs.zip (12.3 KB)

What I don’t understand is that this timer log also shows logging from the CheckForUpdatedContacts handler. It looks like it is running inside the scheduled function, in the same process? Should I run the scheduled Azure functions in a separate endpoint?

What I see after the NullReferenceException (in the RpcException) is also a ServiceBusException. But keep in mind, I also see this when I have disabled the scheduled functions that run daily or more often.
(screenshot omitted)

This ServiceBusException is:

The lock supplied is invalid. Either the lock expired, or the message has already been removed from the queue. For more information please see https://aka.ms/ServiceBusExceptions . Reference:9a874b69-5f77-446f-beeb-ade92fee53d3, TrackingId:f87b59da-90b3-45d5-bdc3-9f976aa72011_B8, SystemTracker:av-spotlerautomate:Queue:av.spotler, Timestamp:2022-12-23T06:07:11 (MessageLockLost)

===
Can you also share (over a support case) the NServiceBus-specific logs?
Yes, I will try to do that.

===
What is the scalability policy for the function app?
I have 2 endpoints, 1 Azure function app and 1 web app, running on the same App Service Plan. By default they run 1 instance, but will scale up to 3 instances when there are a lot of messages. But I already see this behavior with one instance.

p.s. We have a staging slot that is stopped. It is only started on deployment with all functions disabled. We then deploy to this and then execute a slot swap. After this we stop the staging slot again.

==
Also, would it be possible to give read access to the code to my colleagues Mike and Andreas?

No sorry, that would cost me 2 additional licences.

Hi

I think you lost me here. You said this

and then

My understanding was that there is a single function app that has the timer triggers and also processes the messages that are generated by these triggers and two non-function endpoints that are processing other messages. These workers are doing fine, right?

Sorry, I think I now understand what you meant. There are two separate log files because each trigger has its own log file and you were wondering why the entries related to the handler are in the log files that should only log the executions of the timer triggers.

I am pretty sure this is the reason:

FunctionsLoggerFactory.Instance.SetCurrentLogger(functionContext.GetLogger("NServiceBus"));

This code is executed both in the context of handling a message and in the context of sending a message, so what is happening is that the triggers “fight” over the NSB logger, resulting in the log statements being scattered between the two files.

We are now looking at the logs side-by-side. It looks like the NREs are coming from the handler, and we suspect that they might be related to the parallel sends in the handler.

Hi!

We have another question. We were looking at this fragment of the logs:

2022-12-23T06:10:01.428 [Warning] A duplicate diagnostic entry was renamed from Azure Service Bus transport to Azure Service Bus transport-20.
2022-12-23T06:10:01.551 [Information] Executed 'Functions.CheckForUpdatedContactsFunction' (Succeeded, Id=e37a2cb2-b839-49e8-acf0-a8fd11774e12, Duration=1560ms)
2022-12-23T06:40:00.027 [Error] Timeout value of 00:10:00 exceeded by function 'Functions.CheckForUpdatedContactsFunction' (Id: '599ee642-96ff-4b58-b2e0-a6e8bedfe8d2'). Initiating cancellation.

The last line hints at a timer trigger execution that started around 6:30 but there is no indication of it in either of the log files. Could there be another log file missing?

@SzymonPobiega Sorry for the late response. Holiday time :slight_smile:

Yes, that’s what I meant about the logging. I saw messages in the function log coming from other functions (but the same function app). I was confused by that.

I am not sure this is connected. The strange thing is the handlers don’t do a lot most of the time. They are checking for updates, but most of the time there are no updates. When I turn on the timer functions in the middle of the night (= zero updates from customers), the build-up of open connections still happens. Once I disable the timer functions, all the connections drop and everything returns to normal.

BTW, what I sometimes see happening in the Azure Functions is this error (after a couple of RpcExceptions), and I don’t understand why. It also happens when the timer functions are off.

Azure.Messaging.ServiceBus.ServiceBusException:
   at Azure.Messaging.ServiceBus.Amqp.AmqpReceiver+<RenewMessageLockInternalAsync>d__56.MoveNext (Azure.Messaging.ServiceBus, Version=7.8.0.0, Culture=neutral, PublicKeyToken=92742159e12e44c8)
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e)
   at Azure.Messaging.ServiceBus.Amqp.AmqpReceiver+<>.......

"The lock supplied is invalid. Either the lock expired, or the message has already been removed from the queue. For more information please see Azure Service Bus - messaging exceptions (deprecated) - Azure Service Bus | Microsoft Learn . Reference:5500395d-88b7-4409-b3f1-9b9db1baa4d0, TrackingId:bf88ad02-dabe-457b-ae15-3db1f133ab54_B8, SystemTracker:av-spotlerautomate:Queue:av.spotler, Timestamp:2023-01-05T03:49:51 (MessageLockLost) "

I created a support ticket just now, with all the function logs of today added.

Hi

We’ve been able to identify a likely cause of the issue. On line 57 of the ApplyDefaultConfiguration method there is a call to UseTransport. While it looks innocent, it seems to cause the problem, as I was able to reproduce it using our minimal functions sample.

The root cause is the fact that, in order for the functions to work, the functions setup replaces the transport you configure with a shim serverless transport that does not pump messages (like regular transports do) but offers an API through which we can pass in the messages we get from the functions host via the trigger.

At the point in time when line 57 is executed, the NServiceBus transport is already set to that shim, and line 57 replaces it with a regular transport. This causes the shim to not be started, so the messages passed from the host are not processed (resulting in lock renewal problems), while the process pumps messages like a regular (non-serverless) endpoint.

We are fairly sure that removing the code on line 57 should fix the issue. We are also actively working on reviewing the APIs of our serverless packages (both for Functions and for AWS) to remove glitches such as this one.
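
To make the fix concrete, a hedged before/after sketch of the relevant part of the functions host setup. The AdvancedConfiguration and Transport properties appear elsewhere in this thread; the transport constructor call is illustrative:

```csharp
// Problematic: calling UseTransport inside the functions host replaces the
// serverless shim transport that UseNServiceBus installed, so messages
// handed over by the trigger are never processed and their locks expire.
endpointConfiguration.AdvancedConfiguration
    .UseTransport(new AzureServiceBusTransport(connectionString)); // remove this

// Instead, tweak the transport instance the integration already created:
endpointConfiguration.Transport.SubscriptionNamingConvention =
    name => name.Length <= 50 ? name : name.Substring(name.Length - 50);
```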

Sorry for the problems, and thank you for your patience and all the information you shared with us. Without it we would not have been able to identify the issue.

Sz


That is great news to hear! Thanks for finding the possible cause.

I have implemented the workaround (not calling UseTransport in an Azure Function) and deployed this. I enabled the timer functions again, and I will monitor this. I will let you know if it was effective.

============
Update 2 h later => So far, so good. No connections building up and very stable!


Hi Remy,

How is your system going? Is it still stable after this change?

Mike

Hi @MikeMinutillo,

It’s very stable. No more issues related to this. Also, the duplicate diagnostic entries are not happening anymore.

Thanks for the help!


Hi @remyvd,

we are investigating potential improvements to our serverless configuration API. This conversation came onto our radar.

Do you remember the reason why you were using the UseTransport API? Were you looking for some specific configuration setting unavailable in the serverless configuration?

Hi @tmasternak,

The reason is that I had centralized the configuration of endpoints and was calling helper methods (extension methods).

After this issue I changed the code for configuring the endpoints. It now looks like:

[assembly: NServiceBusTriggerFunction("%ENDPOINT_NAME%", TriggerFunctionName = "NServiceBusTriggerFunction")]

Console.Title = "AV.Spotler.FunctionHost";

var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureEndpointDefaults()
    .ConfigureAppConfiguration((context, builder) =>
    {
        builder.AddUserSecrets<Program>();
    })
    .UseNServiceBus((configuration, endpointConfiguration) =>
    {
        endpointConfiguration.AdvancedConfiguration.ConfigureEndpointDefaults(configuration, false);
        
        // endpointConfiguration already has transport configured with ServiceBus so configure directly
        endpointConfiguration.Transport.ConfigureEndpointDefaults();
        endpointConfiguration.Routing.RouteToEndpoint(typeof(RetrieveAndSyncContactToSpotler).Assembly, "AV.Dataverse");
        endpointConfiguration.Routing.RouteToEndpoint(typeof(SyncContactToSpotler).Assembly, "AV.Spotler");
    })
    .ConfigureServices((context, serviceCollection) =>
    {
        serviceCollection.AddNamedHttpClientsWithOAuth1ForSpotlerClient();
        serviceCollection.AddAutoMapper(typeof(ContactProfile));
    })
    .Build();

host.Run();

I still need to configure the transport, because of the subscription (rule) naming convention.

    public static void ConfigureEndpointDefaults(this AzureServiceBusTransport transport)
    {
        transport.SubscriptionNamingConvention = SubscriptionNamingConvention;
        transport.SubscriptionRuleNamingConvention = SubscriptionRuleNamingConvention;
    }
    
    static Func<Type, string> SubscriptionRuleNamingConvention => t => t.FullName!.Length <= 50 ? t.FullName : t.FullName.Substring(t.FullName.Length - 50);

    static Func<string, string> SubscriptionNamingConvention => n => n.Length <= 50 ? n : n.Substring(n.Length - 50);
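
As a rough illustration of what the rule naming convention above produces (the type full name here is made up):

```csharp
// A full name longer than 50 characters is trimmed to its last 50
// characters, since Azure Service Bus limits subscription rule names to
// 50 characters; shorter names pass through unchanged.
var fullName = "AV.Spotler.Messages.Events.SomeVeryLongContactUpdatedEvent";
var ruleName = fullName.Length <= 50 ? fullName : fullName.Substring(fullName.Length - 50);
// ruleName keeps only the trailing 50 characters of fullName
```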

Personally I think NServiceBus should just set good defaults for the naming conventions, based on the transport type, so I don’t need to do this.

It used to. But what’s good for one client is not ideal for another, and finding a middle ground is daunting. That’s why the end user needs to select a naming strategy and configure their conventions. What I agree with is that there could be some baked-in strategies that could be enabled with opt-in configuration. For example: rule names using the event name instead of the full name (with namespace), if your organization doesn’t care about namespaces and event names are unique.
