Endpoint attempting to resolve dependencies on startup?

Version: 7.0.0-beta0012

Not sure if this is normal or not, but I don’t recall having this problem before. The endpoint seems to be trying to resolve dependencies on startup instead of on request. I noticed this because I have some services that require an instance of IMessageSession, and Autofac is erroring out stating that it cannot satisfy the dependency while trying to initialize a Handler that requires it.

This is on startup; the handler has not yet been invoked. I thought that starting NSB sooner in the pipeline might make IMessageSession available sooner, but that actually makes things worse: even more exceptions are thrown for handlers that cannot satisfy their dependencies, which seems to confirm the behavior. The only other thing I can think of is that the Autofac builder isn’t being modified by reference. In other words, whatever is registered at the time of builder.Build() (see the config below) is what is available from that point forward.

Am I doing something wrong? Here’s the setup:

public static void AddNServiceBus(this IServiceCollection services, IConfiguration config, IHostingEnvironment environment)
    {
        var licensePath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "NSB_License.xml");
        var endpointConfiguration = new EndpointConfiguration(ServiceEndpoints.Surveys);

        endpointConfiguration.LicensePath(licensePath);
        endpointConfiguration.SendFailedMessagesTo("error");
        endpointConfiguration.AuditProcessedMessagesTo("audit");
        endpointConfiguration.UseSerialization<NewtonsoftSerializer>();
        endpointConfiguration.EnableInstallers();

        // concurrency
        var globalConcurrency = int.Parse(config["NServiceBus:GlobalConcurrency"]);
        if (globalConcurrency == 0)
        {
            var numberOfReceivers = int.Parse(config["ServiceInfo:InstanceCount"]);
            var perReceiverConcurrency = Environment.ProcessorCount;
            globalConcurrency = perReceiverConcurrency * numberOfReceivers;
        }

        endpointConfiguration.LimitMessageProcessingConcurrencyTo(globalConcurrency);

        // transport
        var transportConnectionString = config["NServiceBus:ConnectionStrings:Transport"];
        Logger.Information($"Using transport connection string: {transportConnectionString}", null);
        var transport = endpointConfiguration.UseTransport<RabbitMQTransport>();
        transport.ConnectionString(transportConnectionString);
        transport.UseConventionalRoutingTopology();

        // persistence
        if (environment.IsProduction() || environment.IsDevelopment())
        {
            var storageConnectionString = config["NServiceBus:ConnectionStrings:Persistence"];
            var persistence = endpointConfiguration.UsePersistence<AzureStoragePersistence>();
            persistence.ConnectionString(storageConnectionString);
        }
        else
        {
            endpointConfiguration.UsePersistence<InMemoryPersistence>();
        }

        // DI - Autofac is only used here to bridge IServiceCollection to NSB
        var builder = new ContainerBuilder();
        builder.Populate(services);
        endpointConfiguration.UseContainer<AutofacBuilder>(c => c.ExistingLifetimeScope(builder.Build()));

        endpointConfiguration.DefineCriticalErrorAction(context =>
        {
            var message = $"NSB Failed: {context.Error}\nShutting down...";
            Logger.Fatal(message, null, context.Exception);
            Log.CloseAndFlush();
            Environment.FailFast(message, context.Exception);
            return Task.CompletedTask;
        });

        // ensure the bus starts
        var session = Endpoint.Start(endpointConfiguration).Result;

        // make it referenceable via factory
        services.AddSingleton<IMessageSession>(p => session);
    }

First, and most important:
From your description, it sounds like you’re trying to resolve services from a message handler which in turn require IMessageSession as a dependency. That is a configuration you’d have to avoid at all costs, as it’s incorrect to use the message session within a message handling context. Note that we have written a plugin called UniformSession which helps to always resolve the correct dependency; you can read more about it here: Uniform Session • UniformSession Support • Particular Docs
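
As a rough sketch only (the service and message names below are made up for illustration), the UniformSession approach means enabling the plugin on the endpoint and letting the service depend on IUniformSession instead of IMessageSession:

// Requires the NServiceBus.UniformSession package:
// endpointConfiguration.EnableUniformSession();

public class NextSurveyMessageService
{
    readonly IUniformSession session;

    public NextSurveyMessageService(IUniformSession session)
    {
        this.session = session;
    }

    public Task SendNext(object nextMessage)
    {
        // Behaves like IMessageHandlerContext.Send when called inside the pipeline
        // and like IMessageSession.Send when called outside of it.
        return session.Send(nextMessage);
    }
}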

Regarding your instance registration:

whatever is available at the time of builder.Build() (see config below) is what is available from that point forward.

I’m no Autofac expert, but that is my understanding of how Autofac works. This configuration would not be able to resolve the session from the Autofac container, but to be honest, that seems correct, as NServiceBus itself should never resolve the message session (NServiceBus only resolves dependencies as part of the message handling pipeline). There would be a workaround of registering the session as a lazy type, but first I’d recommend making sure that the message session is not resolved in incorrect locations.
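
For completeness, the lazy workaround could look roughly like this (a sketch only, reusing the variables from your setup method; the Lazy<T> defers resolution until the session has been assigned after Endpoint.Start):

// Not the recommended route, just the shape of the workaround:
IMessageSession session = null;

var builder = new ContainerBuilder();
builder.Populate(services);
builder.Register(_ => new Lazy<IMessageSession>(() => session)).SingleInstance();

endpointConfiguration.UseContainer<AutofacBuilder>(c => c.ExistingLifetimeScope(builder.Build()));

// Consumers inject Lazy<IMessageSession> and only touch .Value after startup has completed.
session = Endpoint.Start(endpointConfiguration).Result;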

I see. It did feel a bit circular to me. The service it’s trying to resolve is just a class that needs to do some work to figure out what the next message is that needs to be dropped on the bus, so it needs a session to talk to. Since that service is a dependency of the handler (the handler calls this service as one of its steps), it’s asking for that session while it’s constructing everything.

Would it be a better move to instead not inject that service, and just instantiate it within the handler’s Handle() method, passing in the IMessageHandlerContext? For example:

public MyMessageHandler(IDependency1 d1, IDependency2 d2)
{
	_d1 = d1;
	_d2 = d2;
}

public async Task Handle(MyMessage message, IMessageHandlerContext context)
{
	var myservice = new MyService(_d1, _d2, context);
	
	// do stuff
	
	await myservice.DoOtherStuff(...);
}

Hey @jstafford, sorry for the late response! Some thoughts on your last comment:

Have you considered redesigning your service slightly in one of the following directions:

  • If your service is only responsible for deciding what to send next, pass it the values it needs and then use the response from that service in the handler to actually send the desired message. This way you can easily decouple your service from the NServiceBus-specific dependencies.
  • Pass the IMessageHandlerContext as a parameter to the service instead of using DI (see the sketch after this list). This is the general approach we recommend when trying to share the context with dependencies. This more functional approach makes the scope of the context explicit, makes your code easier to test and the dependencies easier to manage (and is also much less risky with regard to sharing the wrong context/session). Passing the context as a parameter ensures that you do not hold a reference to the context longer than the lifetime of the context.
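
A minimal sketch of that second option (NextMessageService, NextSurveyMessage and SurveyData are made-up names, purely for illustration):

public class NextMessageService
{
    public async Task SendNext(IMessageHandlerContext context, SurveyData data)
    {
        // Plain decision logic, easy to unit test without NServiceBus in the picture.
        var nextMessage = DecideNext(data);

        // The send participates in the handler's pipeline; the service never stores
        // the context beyond the scope of this call.
        await context.Send(nextMessage);
    }

    object DecideNext(SurveyData data)
    {
        // decision logic elided
        return new NextSurveyMessage();
    }
}

The handler would then simply call await nextMessageService.SendNext(context, data) from inside Handle.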

After chewing on my pen for a while, trying to accept this design, I went with your second suggestion. Instead of injecting the service, I instantiate it within the handler and pass in the IMessageHandlerContext as a ctor argument.

Thanks!

Good to hear that you found a working solution for your application. We are aware that the design forces some changes compared to previous NServiceBus versions, but we still believe it’s the best option. We have seen many cases where incorrect DI configuration and shared services caused issues such as:

  • Using an IMessageSession instead of the IMessageHandlerContext, causing potential duplicates because the message session does not participate in the handler context’s transaction.
  • Using an IMessageHandlerContext from a previously processed message, causing exceptions due to accessing already-completed transactions.

Getting the DI right is not trivial, and it is something that can easily go wrong in production without being noticed for a while. That’s why we’re trying to be a bit more explicit ourselves, and also recommend being explicit about context/session usage. As mentioned before: we offer a UniformSession package which brings back IBus-like semantics and which is optimized for DI, as it ensures that you’re always working with the right context or session at any time.

I hope these reasons make sense to you and give you a better understanding of our design choices.