Hey Andreas,
Apologies for the delay - I was pulled off this for a bit to help on a project for another client, but I’m back on it now! Here are the answers you were looking for:
Can you share some more details on what you are trying to achieve here since at first glance this would create a bottleneck and also force you to invent your own routing system rather than relying on NServiceBus to handle it for you?
The main principle behind this is that our system has multiple backend processes - one for sending notifications (emails/texts), another for synchronizing with QuickBooks, a few more for outbound EDI messages, and so on - some of which are not exactly performant (mainly EDI and QB). As an example, when a customer’s order is approved, we send a single message to a backend “eventing” queue, which then distributes copies of that message to all of the background queues (each processed by a Windows Service that’s always running/listening). From there, if QuickBooks needs to be synced, the QB processor handles it; likewise for EDI. If our main code (for when an order is approved) had to send a message to every queue directly, that would get quite cumbersome. Additionally, if we add a new EDI processor, the “front-end” of the system doesn’t care about that, but the backend does - the new processor stays in its lane because any message coming in through the centralized queue is automatically forwarded to it.
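To make the fan-out concrete, here’s a minimal in-memory sketch of the central “eventing” dispatcher described above - one message arrives and each registered backend gets its own copy. The class and queue names are mine for illustration, not your actual implementation:

```csharp
using System.Collections.Generic;

// Sketch of the centralized "eventing" dispatcher: one incoming message is
// copied to every registered backend queue (QB sync, EDI, notifications, ...).
class EventingDispatcher
{
    private readonly List<Queue<string>> backendQueues = new List<Queue<string>>();

    // Adding a new backend processor is just another Register call;
    // the front-end code that sends the original message never changes.
    public void Register(Queue<string> queue) => backendQueues.Add(queue);

    // Fan out: enqueue a copy of the message on every backend queue.
    public void Dispatch(string message)
    {
        foreach (var queue in backendQueues)
            queue.Enqueue(message);
    }
}
```

The key property is the one you describe: the sender emits a single message, and registering a new processor is purely a backend concern.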
Hopefully that makes sense! It’s a messaging pattern that has worked very well for me for over a decade, but when .NET Core came out and Microsoft nixed MSMQ, it sent me down the path that led to you guys. I understand all the hate people had for MSMQ, but once you got used to it, it was dead simple, and it natively supported distributed transactions so it just worked.
Is the reason for using a full backup that you have your business data in the same database as the queues?
Yep. Since we previously utilized MSMQ this was a non-issue, but now it’s getting interesting since distributed transactions also aren’t supported. So, whatever insert(s)/update(s) occur at the database level, when one or more messages need to get pushed over to the backend queue, keeping it all in the same database enabled full TransactionScope support. I’ve been playing around with moving it to a different database, but without support for distributed transactions it’s kind of a bust. Since these backend processes are considered mission-critical, it makes me nervous when adding messages can’t participate in a transaction - if something unexpected happens, it’s going to be a rough go.
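For clarity, this is the shape of what I mean - a sketch, with hypothetical table/column names, of the business write and the queued message committing atomically because both tables live in the same database on one connection (so the TransactionScope stays a local transaction rather than escalating to a distributed one):

```csharp
using System.Transactions;
using Microsoft.Data.SqlClient;

static void ApproveOrder(string connectionString, int orderId)
{
    using (var scope = new TransactionScope())
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();

        // Business write (hypothetical schema).
        using (var approve = new SqlCommand(
            "UPDATE Orders SET Status = 'Approved' WHERE OrderId = @id", conn))
        {
            approve.Parameters.AddWithValue("@id", orderId);
            approve.ExecuteNonQuery();
        }

        // Queue write to the centralized eventing queue, same database.
        using (var enqueue = new SqlCommand(
            "INSERT INTO EventingQueue (Body) VALUES (@body)", conn))
        {
            enqueue.Parameters.AddWithValue("@body", "OrderApproved:" + orderId);
            enqueue.ExecuteNonQuery();
        }

        // Both writes commit or roll back together; only one connection is
        // enlisted, so no distributed transaction coordinator is needed.
        scope.Complete();
    }
}
```

Move the queue table to a second database and that second connection is exactly what forces the escalation to a distributed transaction.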
The transaction log should remain stable over time under normal circumstances. What kind of load (messages per second) are you expecting on the system?
This one is probably more my misunderstanding of NServiceBus than anything else, since I’m trying to learn it in a crunch rather than getting time to really dive in. Basically, my understanding of SQL Server has always been that the Full recovery model retains every insert/update/delete in the transaction log (until a log backup) so you can restore to a point in time if necessary. I wouldn’t want that overhead for these messages, but only for the NServiceBus tables. Separating them into a database with Simple as the recovery model is an option, but then I lose distributed transactions since it’s a different connection.
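Since the recovery model is a per-database setting, splitting the queue tables out would look like the fragment below (database name is made up) - it just comes with the transaction trade-off mentioned above:

```sql
-- Hypothetical database holding only the NServiceBus queue tables.
-- Simple recovery truncates the log on checkpoint instead of retaining
-- it for point-in-time restore; business data stays in its Full-recovery DB.
ALTER DATABASE NsbQueues SET RECOVERY SIMPLE;
```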
I hope that helps!