SQL Server Storage / Transaction logs

We have been exploring messaging alternatives to MSMQ since .NET Core doesn't support it, so one question I have related to the SQL transport is how the data is stored. We're already running it in a dev environment as a PoC and messages go in and can be processed, but the architecture I am planning to mimic is distributed in nature. Instead of having our code put messages into various queues based on different actions, I prefer to send a single message to a master queue, which then distributes to every other queue, and each one determines whether it needs to process or ignore the message. This keeps the architecture plan simple: send everything that happens to one place and let individual backend processes determine whether they need to do something, as opposed to the core system determining which message(s) to generate/push.

All of that is to say, I know when messages get processed within NServiceBus they get deleted from the table, but if you have the database set up with the Full recovery model (as opposed to Simple, so you get raw transaction logs), wouldn't each message within SQL ultimately get caught up in there, causing it to balloon in size over time? Or is there a way to keep the transaction log from growing just because of the NServiceBus-driven tables?

Hi Scott!

Instead of having our code put messages into various queues based on different actions, I prefer to send a single message to a master queue, which then distributes to every other queue, and each one determines whether it needs to process or ignore the message. This keeps the architecture plan simple: send everything that happens to one place and let individual backend processes determine whether they need to do something, as opposed to the core system determining which message(s) to generate/push.

Can you share some more details on what you are trying to achieve here since at first glance this would create a bottleneck and also force you to invent your own routing system rather than relying on NServiceBus to handle it for you?

I know when messages get processed within NServiceBus they get deleted from the table, but if you have the database set up with the Full recovery model (as opposed to Simple, so you get raw transaction logs), wouldn't each message within SQL ultimately get caught up in there, causing it to balloon in size over time?

While a full backup would include the log, the backup should remain fairly stable in size, assuming the log itself is kept at a stable size as messages are processed.

Is the reason for using a full backup that you have your business data in the same database as the queues?

Or is there a way to keep the transaction log from growing just because of the NServiceBus-driven tables?

The transaction log should remain stable over time under normal circumstances. What kind of load (messages per second) are you expecting on the system?

We have been exploring messaging alternatives to MSMQ

Once you do select your new transport, I would recommend taking a look at our new Bridge component, which would allow you to transition smoothly, endpoint by endpoint, if that is desired.

Cheers,

Andreas

Hey Andreas,

Apologies for the delay - I was pulled off of this for a bit to help on a project for another client, but I'm back on it now! Here are the answers you were looking for:

Can you share some more details on what you are trying to achieve here since at first glance this would create a bottleneck and also force you to invent your own routing system rather than relying on NServiceBus to handle it for you?

The main principle behind this is our system has multiple backend processes - one for sending notifications (emails/texts), another for synchronizing with QuickBooks, a few more that are for outbound EDI messages and so on - some of which are not exactly performant (mainly EDI and QB). So, as an example, when a customer's order is approved, we send a single message to a backend "eventing" queue, then it spits out copies of the same message to all background queues (which are processed via a Windows Service that's always running/listening). From there, if QB needs to be synced, the QB processor handles it; similarly, EDI would handle the same thing. If our main code (for when an order is approved) needed to send a message to all queues, that could get quite cumbersome. Additionally, if we add a new EDI processor, the "front-end" of the system doesn't care about that, but the backend does. So, adding that new processor stays in its lane because any messages coming in through the centralized queue can automatically be sent to the new queue.

Hopefully that makes sense! It's a messaging pattern that has worked very well for me for over a decade, but when .NET Core came out and Microsoft nixed MSMQ, it sent me down the path that led to you guys. I understand all the hate people had for MSMQ, but once you got used to it, it was dead simple and natively supported distributed transactions, so it just worked.

Is the reason for using a full backup that you have your business data in the same database as the queues?

Yep. Since we previously utilized MSMQ this was a non-issue, but now it's getting interesting since distributed transactions also aren't supported. So, for whatever insert(s)/update(s) occur at the database level, when one or more messages need to get pushed over to the backend queue, keeping it all in the same database enabled full TransactionScope support. I've been playing around with moving it to a different database, but without support for distributed transactions, it's kind of a bust. Since these backend processes are considered mission-critical, it makes me nervous when adding messages can't participate in a transaction, because if something unexpected happens, it's going to be a rough go.

The transaction log should remain stable over time under normal circumstances. What kind of load (messages per second) are you expecting on the system?

This one is probably more my misunderstanding of NServiceBus than anything else, since I'm trying to learn it in a crunch as opposed to getting time to really dive in. Basically, my understanding of SQL Server has always been that the Full recovery model means every insert/update/delete creates an entry in the transaction log so you can roll back to that moment in time if necessary. With these messages, I wouldn't want that, but only for the NServiceBus tables. Separating them into a database with Simple as the recovery model is an option, but then I lose distributed transactions since it's a different connection.

I hope that helps!

The main principle behind this is our system has multiple backend processes - one for sending notifications (emails/texts), another for synchronizing with QuickBooks, a few more that are for outbound EDI messages and so on - some of which are not exactly performant (mainly EDI and QB). So, as an example, when a customer’s order is approved, we send a single message to a backend “eventing” queue, then it spits out copies of the same message to all background queues (which are processed via a Windows Service that’s always running/listening)

This part sounds exactly like the pub/sub support that NServiceBus gives you out of the box. Can you take a closer look at Publish-Subscribe • NServiceBus • Particular Docs, and perhaps also play with the sample Native Publish/Subscribe • NServiceBus Samples • Particular Docs, to get a feel for it?

In your specific case, it would be publishing an OrderApproved event that your Notification, QB and EDI endpoints subscribe to.
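To make that concrete, here is a minimal sketch of what that could look like with NServiceBus pub/sub (the `OrderApproved` type, its `OrderId` property, and the handler names are illustrative, not taken from your system):

```csharp
using System.Threading.Tasks;
using NServiceBus;

// Shared message contract, referenced by both publisher and subscribers.
public class OrderApproved : IEvent
{
    public int OrderId { get; set; }
}

// Publisher side: when the order is approved, publish the event once.
// The transport delivers a copy to every subscribed endpoint.
public class OrderService
{
    public Task Approve(IMessageSession session, int orderId) =>
        session.Publish(new OrderApproved { OrderId = orderId });
}

// Subscriber side (e.g. in the QuickBooks endpoint): declaring a handler
// is enough; with the SQL transport the subscription is stored automatically.
public class OrderApprovedHandler : IHandleMessages<OrderApproved>
{
    public Task Handle(OrderApproved message, IMessageHandlerContext context)
    {
        // Sync the order to QuickBooks here.
        return Task.CompletedTask;
    }
}
```

Endpoints that don't declare a handler for `OrderApproved` simply never receive it, so adding a new EDI processor is just deploying a new endpoint with its own handler - nothing changes on the publishing side.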

If our main code (for when an order is approved) needed to send a message to all queues, that could get quite cumbersome. Additionally, if we add a new EDI processor, the “front-end” of the system doesn’t care about that, but the backend does. So, adding that new processor stays in its lane because any messages coming in through the centralized queue can automatically be sent to the new queue.

For the SqlTransport this happens natively as long as you emit the event via the Publish method. Subscriptions are stored automatically in the transport database so there should be no need to configure anything to get the behaviour you want out of the box.

Regarding the EDI flow you mention, are you talking about processing incoming orders in EDI format or are you referring to emitting EDI formatted data to integration partners?

This one is probably more my misunderstanding of NServiceBus than anything else, since I'm trying to learn it in a crunch as opposed to getting time to really dive in. Basically, my understanding of SQL Server has always been that the Full recovery model means every insert/update/delete creates an entry in the transaction log so you can roll back to that moment in time if necessary. With these messages, I wouldn't want that, but only for the NServiceBus tables. Separating them into a database with Simple as the recovery model is an option, but then I lose distributed transactions since it's a different connection.

The SQL Server transport does support TransactionScope-level consistency, so if you are able to use MSDTC you can separate the queuing database server from your business database server.

If you want to avoid MSDTC and still get the same level of consistency, you can still use separate database catalogs, but they will have to be on the same database server, and you have to make sure to access your business data using the same connection and transaction as the transport.
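Inside a message handler, that pattern looks roughly like this when SQL persistence is configured (a sketch only - the `Orders` table, its columns, and the `OrderApproved` type are made-up names):

```csharp
using System.Threading.Tasks;
using NServiceBus;
using NServiceBus.Persistence.Sql;

public class OrderApprovedHandler : IHandleMessages<OrderApproved>
{
    public async Task Handle(OrderApproved message, IMessageHandlerContext context)
    {
        // The storage session exposes the same connection and transaction the
        // transport used to receive the message, so the business write and all
        // message operations commit (or roll back) together - no MSDTC needed.
        var session = context.SynchronizedStorageSession.SqlPersistenceSession();

        using (var command = session.Connection.CreateCommand())
        {
            command.Transaction = session.Transaction;
            command.CommandText = "update Orders set Status = 'Approved' where Id = @id";
            var p = command.CreateParameter();
            p.ParameterName = "@id";
            p.Value = message.OrderId;
            command.Parameters.Add(p);
            await command.ExecuteNonQueryAsync();
        }
    }
}
```

If the handler throws, the whole transaction rolls back: the business update is undone and the incoming message returns to the queue for a retry.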

See SQL Server Transport and SQL Persistence • NServiceBus Samples • Particular Docs for more details on how to achieve this.

With any of the above approaches, you should be able to use the Simple recovery model for the queuing database.
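For reference, switching a dedicated queuing database to the Simple recovery model is a one-liner (the database name here is hypothetical):

```sql
ALTER DATABASE NServiceBusTransport SET RECOVERY SIMPLE;
```

With Simple recovery, the log space for completed transactions is reused automatically, so the high insert/delete churn from the queue tables no longer accumulates in log backups.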

If you feel that it would be easier to jump on a call with us and discuss your design in more detail just send an email to support@particular.net

Hope this helps!

Cheers,

Andreas


Thanks Andreas! Pub/sub definitely seems like the right option, but if something had registered but was currently down (such as a crash while trying to process a message, or the service being off temporarily), can those messages queue up? I've done some pub/sub stuff previously, but it's always seemed like it'll publish to listeners, and if a listener isn't there, the message is just gone.

Getting rid of MSDTC is unfortunately a requirement, because .NET Core doesn't support it, and I don't know how to get messages to use the same connection(s) as the parent, because those are handled mostly by Entity Framework. I don't have many places opening connections natively.

Let me know on the above, but we may reach out for a call soon! I think this platform is quite promising, but I need to wrap my head around changing stuff I've used for ~15 years, which can be quite difficult 🙂

Not gone at all - when consumers (we call them endpoints) are down, messages will just pile up in their input queue until they come back up again, so it won't affect the publisher and nothing will be lost.

Getting rid of MSDTC is unfortunately a requirement, because .NET Core doesn't support it, and I don't know how to get messages to use the same connection(s) as the parent, because those are handled mostly by Entity Framework. I don't have many places opening connections natively.

This is possible with Entity Framework; this sample shows how to set it up: Using the SQL Server transport with Entity Framework • SQL Server Transport • Particular Docs
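In rough terms, the idea in that sample is to hand the transport's connection and transaction to the DbContext instead of letting EF open its own (an EF6-style sketch; `MyDbContext`, `Order`, and the way the session is obtained depend on your persistence setup):

```csharp
using System.Data.Common;
using System.Data.Entity;

public class MyDbContext : DbContext
{
    // contextOwnsConnection: false - NServiceBus owns the connection's lifetime,
    // so EF must not close or dispose it.
    public MyDbContext(DbConnection connection)
        : base(connection, contextOwnsConnection: false)
    {
    }

    public DbSet<Order> Orders { get; set; }
}

// Inside a message handler:
// var session = context.SynchronizedStorageSession.SqlPersistenceSession();
// using (var db = new MyDbContext(session.Connection))
// {
//     // Enlist EF in the transport's transaction so everything commits together.
//     db.Database.UseTransaction(session.Transaction);
//     db.Orders.Add(new Order { Status = "Approved" });
//     await db.SaveChangesAsync();
// }
```

Since EF rides on the transport's connection and transaction, no distributed transaction is ever started, which is exactly what you need on .NET Core.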

Let me know on the above, but we may reach out for a call soon! I think this platform is quite promising, but I need to wrap my head around changing stuff I’ve used for ~15 years which can be quite difficult

I hear ye! Just reach out if you feel the need to discuss this in more depth.

Cheers,

Andreas