Anyone using SqlStreamStore with NServiceBus Outbox?

I’m currently using the NServiceBus Outbox to scope our outgoing events into a single transaction. I noticed an issue, though, with SqlStreamStore: it doesn’t use the ambient transaction scope supplied by NServiceBus. Based on some discussion over on their GitHub page, it seems that this might not be possible (as SSS disposes of the transaction before NServiceBus has a chance to commit the changes).

Is anyone else here using Outbox with SSS? How are you handling the transaction scope between these two technologies/layers? Are there any architectural-level ways of using the NServiceBus Outbox that would make it compatible with SSS? I have to imagine someone has it working, or at least has some experience (even if not successful) trying to get them to work together.


Hi Marcus

Can you describe a bit more the things you are doing in your message handler? Are you sending out NSB messages and appending events to SSS?

Since version 6, NServiceBus does not use an ambient TransactionScope in Outbox mode. It exposes the Outbox-managed connection via `SynchronizedStorageSession`, but looking at the SSS docs, that does not help much here because SSS does not seem to expose any API that would allow you to pass in an external connection/transaction pair.
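To make that concrete, here is a minimal sketch (assuming NServiceBus SQL Persistence; the handler and message types are made up) of how a handler reaches the Outbox-managed connection/transaction pair, and why SSS cannot consume it:

```csharp
public class OrderSentHandler : IHandleMessages<OrderSent>
{
    public Task Handle(OrderSent message, IMessageHandlerContext context)
    {
        // NServiceBus SQL Persistence exposes the Outbox-managed session here;
        // the same connection/transaction covers the de-duplication records,
        // the stored outgoing messages, and your business data.
        var session = context.SynchronizedStorageSession.SqlPersistenceSession();
        var connection = session.Connection;
        var transaction = session.Transaction;

        // SqlStreamStore, by contrast, is configured with a connection string
        // and opens its own connections internally, so there is no way to hand
        // it the pair above:
        // var store = new MsSqlStreamStore(new MsSqlStreamStoreSettings(connectionString));

        return Task.CompletedTask;
    }
}
```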

SSS is based on a different model of idempotency than the Outbox. SSS seems to be based on the assumption that the order of incoming messages is always preserved (which is true for streams but not for queues) and that the business logic is deterministic. The idempotency check is output-based: if the output of an append is a substring of the stream, it is ignored as a duplicate.

The NServiceBus model is more relaxed: the Outbox requires neither a strict order of incoming messages nor deterministic processing.

Yes. As an example of where we are running into an issue, we have an event being published that is picked up by multiple handlers in a single domain (e.g., OrderSent in a CustomerService endpoint). This requires multiple handlers because they need to update individual aggregates in the domain. Those handlers are in charge of appending new events to SSS, and the Outbox then broadcasts those events once all the handlers complete. If one of the handlers throws an exception, the Outbox rolls back the events being published across all the handlers, but SSS is unaware of the transaction and doesn’t roll back the insertions into the database, breaking the idempotency. So, short answer: yes, we are receiving an event with multiple handlers, appending events to SSS, then publishing new events once that completes.

Maybe I misunderstood the docs. In the Outbox examples it mentions that it uses a single TransactionScope (however, the sample casts it to a manually provided SqlTransaction). Is that not an ambient TransactionScope, or am I just misunderstanding the docs?

That seems to be the roadblock I ran into. While SSS appears to lock this option out, I was hoping NServiceBus might have had a way to work around that limitation. I guess I’m surprised others have not run into this.

Edit - I was reading through this thread, and it seems that one way of resolving this is moving to an event publisher that reads from the SSS event store. That way, instead of writing and publishing events in the same step (only half of which the Outbox covers), we append the new events in SSS and then have a separate event publisher that reads that database and publishes the new events as they appear. If this direction makes sense, do you know of any articles or sample code that take a similar approach?
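For what it’s worth, the “separate publisher” idea can be sketched with SSS’s all-stream subscription; the checkpoint storage and the relay helpers (`Deserialize`, `SaveCheckpoint`) are assumptions for illustration, not SSS APIs:

```csharp
// Relay process: subscribe to the SSS $all stream and forward new events
// to the bus, remembering the last position so a restart can resume.
var subscription = streamStore.SubscribeToAll(
    continueAfterPosition: lastCheckpoint,   // loaded from your own checkpoint store
    streamMessageReceived: async (sub, streamMessage, ct) =>
    {
        var json = await streamMessage.GetJsonData(ct);
        await endpoint.Publish(Deserialize(streamMessage.Type, json)); // hypothetical helper
        await SaveCheckpoint(streamMessage.Position);                  // hypothetical helper
    });
```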


Unfortunately, the Outbox sample you mentioned is wrong. I think we did not update it when we changed the Outbox behavior. The code snippets are correct, though. The Outbox works by sharing the connection/transaction pair between the Outbox infrastructure, which de-duplicates incoming messages and stores outgoing messages, and the business logic (or saga persistence). But anyway, it would not help in your case.

Do I understand correctly that each of your handlers invokes the business logic of an aggregate, persists the changes to the aggregate by appending events to an SSS stream, and publishes some events via NServiceBus?

Regarding your aggregates, do you store information about processed messages inside each aggregate or do you rely on SSS built-in idempotency?

That is correct.

We store information about processed messages in SSS as metadata, but the aggregates themselves don’t have access to these fields. We repopulate some of these values back into the original message headers when SSS deserializes the events, so we have access to the original message headers at that level.

I guess the easiest way to make the logic idempotent would be to store information about processed messages in the SSS events and then somehow make it accessible, so you can check whether a given aggregate has already processed that message. Then you can have one SSS Append call per aggregate without the need to synchronize them, either with other aggregates or with the NSB Outbox.
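A sketch of that check, assuming the incoming NServiceBus message id is written into each event's JSON metadata on append (the metadata shape and property name are made up):

```csharp
// Returns true if the aggregate's stream already contains events produced
// while handling this incoming message id.
async Task<bool> AlreadyProcessed(IStreamStore store, StreamId streamId, string incomingMessageId)
{
    var page = await store.ReadStreamBackwards(streamId, StreamVersion.End, maxCount: 100);
    return page.Messages.Any(m =>
        m.JsonMetadata != null && m.JsonMetadata.Contains(incomingMessageId));
    // A real check would deserialize the metadata rather than substring-match.
}

// On append, stamp each event's metadata with the causing message id:
var metadata = $"{{\"causationMessageId\":\"{incomingMessageId}\"}}";
await store.AppendToStream(streamId, expectedVersion,
    new[] { new NewStreamMessage(Guid.NewGuid(), "OrderShipped", eventJson, metadata) });
```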

That makes sense; I think that is the direction we will take. There’s some great additional information over on your blog posts at Exactly-Once. Right now we are attempting to use an “All-or-Nothing” approach (though it turned out we were actually using a “No-Transactions” model), and it seems we need to move to an “Atomic Store-and-Send” approach.

I think storing the messageId in SSS and using it to publish the messages that the Outbox would have sent will work. So in the case of retrying a failed message, we can check whether the aggregate has already been saved with events from the same messageId. If it has, we can queue up the events that match that messageId from SSS to NServiceBus, skip saving to SSS, and then continue on with the rest of the handlers. If it falls over, the messages are discarded and the process starts again with no side effects. If all the handlers complete successfully, the batched events in NServiceBus get published.
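Inside a handler, that flow could look roughly like this (`ReadEventsForMessageId` and `AppendWithMessageIdMetadata` are hypothetical helpers sketching the idea, not existing APIs):

```csharp
public async Task Handle(OrderSent message, IMessageHandlerContext context)
{
    // Were events for this aggregate already appended under this message id?
    var stored = await ReadEventsForMessageId(streamId, context.MessageId);
    if (stored.Any())
    {
        // Retry path: skip the append and just re-queue what was stored last time.
        foreach (var evt in stored)
            await context.Publish(evt);
        return;
    }

    var newEvents = aggregate.Handle(message);
    await AppendWithMessageIdMetadata(streamId, context.MessageId, newEvents);
    foreach (var evt in newEvents)
        await context.Publish(evt);   // the Outbox batches these until all handlers succeed
}
```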

Thanks Szymon for your help.

Hi @Jimmy_JimJim

My colleague @tmasternak just published a new blog post describing a way to implement de-duplication that fits well with the event sourcing approach. You might find it useful:


Thanks Szymon, that blog post certainly adds some more guidance to what we are trying to achieve.