Can't resolve NServiceBus.Persistence.Sql.ISqlStorageSession

I am using Entity Framework Core to persist data to a SQL database. Our endpoint registers the DbContext with the unit-of-work lifecycle, and I think this is causing issues because the connection is being shared across many different handlers concurrently. I came across this sample as a way to register my DbContext and still use the unit-of-work lifecycle: https://docs.particular.net/samples/entity-framework-core/.

endpointConfiguration.RegisterComponents(c =>
{
    c.ConfigureComponent(b =>
    {
        var session = b.Build<ISqlStorageSession>();

        var context = new ReceiverDataContext(new DbContextOptionsBuilder<ReceiverDataContext>()
            .UseSqlServer(session.Connection)
            .Options);

        //Use the same underlying ADO.NET transaction
        context.Database.UseTransaction(session.Transaction);

        //Ensure context is flushed before the transaction is committed
        session.OnSaveChanges(s => context.SaveChangesAsync());

        return context;
    }, DependencyLifecycle.InstancePerUnitOfWork);
});
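For completeness, handlers then receive the registered context via constructor injection and only track changes; the OnSaveChanges callback above calls SaveChangesAsync just before the storage session's transaction commits. A minimal sketch (MyMessage and MyEntity are placeholders, not from the sample):

public class MyMessageHandler : IHandleMessages<MyMessage>
{
    readonly ReceiverDataContext context;

    public MyMessageHandler(ReceiverDataContext context)
    {
        // The per-unit-of-work context registered above
        this.context = context;
    }

    public Task Handle(MyMessage message, IMessageHandlerContext handlerContext)
    {
        // Track changes only; OnSaveChanges flushes the context before
        // the synchronized storage session's transaction is committed.
        context.Add(new MyEntity { /* ... */ });
        return Task.CompletedTask;
    }
}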

So this is how I registered my DbContext in my endpoint configuration, but when I test after deploying to our development server I receive the following error:

System.InvalidOperationException: Unable to resolve type: NServiceBus.Persistence.Sql.ISqlStorageSession, service name:
at LightInject.ServiceContainer.CreateDelegate(Type serviceType, String serviceName, Boolean throwError)
at LightInject.ServiceContainer.CreateDefaultDelegate(Type serviceType, Boolean throwError)
at LightInject.ServiceContainer.GetInstance(Type serviceType)
at LightInject.Scope.WithThisScope[T](Func`1 function)
at NServiceBus.CommonObjectBuilder.Build[T]()
at mwa.CA.Life.Endpoints.BeneficiaryProcessing.EndpointConfig.<>c__DisplayClass0_0.<GetConfiguration>b__1(IBuilder b)
at LightInject.PerScopeLifetime.CreateScopedInstance(Scope scope, Func`1 createInstance)
at System.Collections.Concurrent.ConcurrentDictionary`2.GetOrAdd(TKey key, Func`2 valueFactory)
at LightInject.ServiceContainer.GetInstance(Type serviceType)
at LightInject.Scope.WithThisScope[T](Func`1 function)
at NServiceBus.CommonObjectBuilder.Build[T]()
at mwa.CA.Life.Endpoints.BeneficiaryProcessing.EndpointConfig.<>c.<GetConfiguration>b__0_4(IBuilder x)
at LightInject.PerScopeLifetime.CreateScopedInstance(Scope scope, Func`1 createInstance)
at System.Collections.Concurrent.ConcurrentDictionary`2.GetOrAdd(TKey key, Func`2 valueFactory)
at DynamicMethod(Object[] )
at LightInject.PerScopeLifetime.CreateScopedInstance(Scope scope, Func`1 createInstance)
at System.Collections.Concurrent.ConcurrentDictionary`2.GetOrAdd(TKey key, Func`2 valueFactory)
at LightInject.ServiceContainer.GetInstance(Type serviceType)
at LightInject.Scope.WithThisScope[T](Func`1 function)
at NServiceBus.LoadHandlersConnector.d__1.MoveNext()

It appears that the container can’t resolve the ISqlStorageSession dependency. Any ideas how I might get this to work?

Thanks,
David

I was able to resolve this error. We have a homegrown NServiceBus wrapper library used for setup/configuration, and it was referencing an older version of NServiceBus.Persistence.Sql (v4.2). After updating it to v6.2.1 I can now build the ISqlStorageSession dependency in my endpoint configuration. However, this did not fix the underlying problem. We are using Entity Framework Core and either inject our DbContext directly into handlers, or register services in RegisterComponents with the built-in DI container, inject the DbContext into those services, and inject the services into handlers. We appear to have a leaky connection because we keep getting the error:

System.InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.

I was hoping that configuring my DbContext as in the previous snippet would fix the issue, but it didn’t. I’ve also tried changing the DependencyLifecycle to InstancePerCall rather than InstancePerUnitOfWork, but we still eventually get the “Timeout expired…” error. I’ve gone through our EF LINQ queries and haven’t been able to spot anything obvious that would be hanging on to the connection; they all end with materializing calls such as .FirstOrDefault(), .ToList(), or .Single()… I’ve also tried wrapping the DbContext queries in a SemaphoreSlim and a Mutex, but I still get the same error. Any ideas or best practices for resolving this issue?

One of our message handlers pretty consistently throws the “Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool.” error. This confuses me because the handler has no dependency on Entity Framework’s DbContext or any DbConnection. We do use SqlPersistence, so is there something going on with the persistence that could be causing this error?

System.InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
at System.Data.Common.ADP.ExceptionWithStackTrace(Exception e)
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Extensions.d__4.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at NServiceBus.SqlDialect.MsSqlServer.d__33.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at StorageAdapter.d__7.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at NServiceBus.LoadHandlersConnector.d__2.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at NServiceBus.LoadHandlersConnector.d__1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at CurrentSessionBehavior.d__2.MoveNext()

Hi

Is there any other place in the codebase where you open connections to the database? The exception you encounter is indeed a sign of a connection leak.

The connection created by NServiceBus for the synchronized storage session you mentioned (the one used by the DI-registered EF context) is opened here, before any handler is called: https://github.com/Particular/NServiceBus/blob/master/src/NServiceBus.Core/Pipeline/Incoming/LoadHandlersConnector.cs#L30, and closed/disposed here: https://github.com/Particular/NServiceBus/blob/master/src/NServiceBus.Core/Pipeline/Incoming/LoadHandlersConnector.cs#L60. The code in the session that actually disposes the connection is here: https://github.com/Particular/NServiceBus.Persistence.Sql/blob/release-6.2/src/SqlPersistence/SynchronizedStorage/StorageSession.cs#L48, so I can’t see how a session SQL connection could be leaking and depleting the pool.

The code you mentioned you borrowed from the sample is the best way to wire up the DI-registered EF context, as it makes the session available throughout the application. The correct lifecycle to register it with is InstancePerUnitOfWork, but InstancePerCall should work too.

There is usually no need to wrap connection usage in semaphores: a connection is used by a single thread at a time and should not be passed between threads. NServiceBus invokes all handlers for a given message sequentially (here: https://github.com/Particular/NServiceBus/blob/master/src/NServiceBus.Core/Pipeline/Incoming/LoadHandlersConnector.cs#L45), so there is no concurrency between handlers.

Szymon

So I ran this query to find the open connections:

SELECT
    ec.session_id
    , last_read
    , last_write
    , text
    , client_net_address
    , program_name
    , host_process_id
    , login_name
FROM sys.dm_exec_connections ec
JOIN sys.dm_exec_sessions es ON ec.session_id = es.session_id
CROSS APPLY sys.dm_exec_sql_text(ec.most_recent_sql_handle) AS dest
WHERE es.status = 'sleeping'
AND program_name = 'MyEndpoint'
ORDER BY program_name
    , text;

The queries I see for the open connections appear to be Entity Framework queries from handlers where we inject the DbContext, registered as shown above to use the synchronized storage session’s connection. Valid exceptions are being thrown in some of these message handlers. Within a handler where the DbContext’s connection was opened, would throwing an exception prevent the connection from being closed? I would have thought the GC would come along and close the connection. When I ran that query I could see connections still open 30 minutes after their last read/write.
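A note on the GC question: ADO.NET connections are returned to the pool only when Close or Dispose runs; the GC finalizer will eventually reclaim an undisposed connection, but non-deterministically, which is why leaked connections can sit in sleeping sessions for a long time before the pool is exhausted. A minimal sketch of the deterministic pattern (connectionString is a placeholder):

using System.Data.SqlClient;

// A pooled connection goes back to the pool only when Dispose/Close runs.
// A "using" block guarantees that even when an exception is thrown, which
// is why relying on the GC leaves connections open in sleeping sessions.
using (var connection = new SqlConnection(connectionString))
{
    await connection.OpenAsync();
    // ... execute commands ...
} // Dispose returns the connection to the pool here, exception or not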

So I’m not sure if part of my issue is that I register a couple of repositories that also take the same DbContext in their constructors. In my endpoint configuration I’m registering my dependencies like so:

r.ConfigureComponent(b =>
{
    var session = b.Build<ISqlStorageSession>();

    var context = new MyDbContext(new DbContextOptionsBuilder<MyDbContext>()
        .UseSqlServer(session.Connection).Options);

    context.Database.UseTransaction(session.Transaction);
    session.OnSaveChanges(s => context.SaveChangesAsync());

    return context;
}, DependencyLifecycle.InstancePerUnitOfWork);

r.ConfigureComponent<IMyRepository>(b =>
{
    var session = b.Build<ISqlStorageSession>();

    var context = new MyDbContext(new DbContextOptionsBuilder<MyDbContext>()
        .UseSqlServer(session.Connection).Options);

    context.Database.UseTransaction(session.Transaction);
    session.OnSaveChanges(s => context.SaveChangesAsync());

    return new MyRepository(context);
}, DependencyLifecycle.InstancePerUnitOfWork);

Even when I register things this way it still seems to leave connections open (active connections). I tried registering so that I’m sharing the same ISqlStorageSession (is b.Build<ISqlStorageSession>() creating a new instance across the two different registrations?) like this:

ISqlStorageSession session = null;

r.ConfigureComponent(b =>
{
    session = b.Build<ISqlStorageSession>();

    var context = new MyDbContext(new DbContextOptionsBuilder<MyDbContext>()
        .UseSqlServer(session.Connection).Options);

    context.Database.UseTransaction(session.Transaction);
    session.OnSaveChanges(s => context.SaveChangesAsync());

    return context;
}, DependencyLifecycle.InstancePerUnitOfWork);

r.ConfigureComponent<IMyRepository>(b =>
{
    var context = new MyDbContext(new DbContextOptionsBuilder<MyDbContext>()
        .UseSqlServer(session.Connection).Options);

    context.Database.UseTransaction(session.Transaction);
    session.OnSaveChanges(s => context.SaveChangesAsync());

    return new MyRepository(context);
}, DependencyLifecycle.InstancePerUnitOfWork);

Registering this way did allow the active connections to be closed, but I still eventually ran into errors, just different ones: no longer the timeout or connection-pool-exhaustion error, but that there is already a DataReader open, or that the connection currently has a transaction enlisted (“Finish current transaction and retry”).

Is there a recommended way for sharing the synchronized storage session when I have dependencies that also take the same dbcontext in their constructors?
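For reference, one pattern that avoids creating a second context over the same session connection is to let the repository registration resolve the already-registered per-unit-of-work context instead of building its own. This is a sketch, not from the thread, under the assumption that b.Build<MyDbContext>() returns the same per-unit-of-work instance within a message’s scope:

// Register the DbContext once, bound to the synchronized storage session
r.ConfigureComponent(b =>
{
    var session = b.Build<ISqlStorageSession>();

    var context = new MyDbContext(new DbContextOptionsBuilder<MyDbContext>()
        .UseSqlServer(session.Connection).Options);

    context.Database.UseTransaction(session.Transaction);
    session.OnSaveChanges(s => context.SaveChangesAsync());

    return context;
}, DependencyLifecycle.InstancePerUnitOfWork);

// The repository resolves the same per-unit-of-work context rather than
// constructing a second one over the same connection and transaction.
r.ConfigureComponent<IMyRepository>(
    b => new MyRepository(b.Build<MyDbContext>()),
    DependencyLifecycle.InstancePerUnitOfWork);

With a single context per unit of work there is only one change tracker and one OnSaveChanges callback, which sidesteps the “DataReader already open” and enlisted-transaction errors that two contexts on one connection can produce.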

I’m also noticing that if a handler throws an exception, the DbContext doesn’t seem to close its connection.

Hi

This seems to be quite complex. Please open a support case and then we can take a look at your repository registration code and try to reproduce the problem.

Regards,
Szymon