How to properly approach this scenario: sagas?

Situation:

we currently use NServiceBus with SQL for transport/persistence set up as pub/sub.

The receivers’ databases can be accessed by one or more Web APIs. However, when we’re syncing data for a specific customer, we want to be able to block the API from reading that customer’s data while keeping the rest open. To do this, on each call we simply check whether a bit flag is set to 1 in a database for customer X.
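The per-customer gate described above can be sketched as follows (plain Python, not the actual API code; `sync_flags`, `set_sync_flag`, and `read_customer_data` are hypothetical names standing in for the database bit flag and the API read path):

```python
# Sketch of the per-call gate: reads for a customer are refused
# while that customer's "sync in progress" bit flag is set to 1.

sync_flags = {}  # stand-in for the bit-flag column in the receiver's database

def set_sync_flag(customer_id, blocked):
    """Flip the bit flag for one customer (1 = blocked, 0 = open)."""
    sync_flags[customer_id] = 1 if blocked else 0

def read_customer_data(customer_id, data_store):
    """API read path: check the flag on every call before serving data."""
    if sync_flags.get(customer_id, 0) == 1:
        # Data for this customer is being synced; refuse the read.
        raise RuntimeError(f"Customer {customer_id} is being synced, try later")
    return data_store[customer_id]
```

Note that only reads for the flagged customer are blocked; calls for all other customers keep working.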

Future scenario:

We want to publish a signal that flips the bit flag in the respective receivers for customer X to block API calls. Then we send a number of messages (could be 10, could be several million). After all messages have been handled by the respective receiver, the bit flag for that customer should be flipped back to 0.

The big question:

Which approach would be suitable for this? A saga? Something else? Abandon the usage of NSB in this case?

For the saga part: would it be possible to set up a saga that acts like one big transaction covering the future scenario? If so, how?

Hi @Kris_van_der_Mast,

You’re proposing two technical solutions, but we’re not familiar with the (functional) problem you’re trying to solve.

  1. Why is the API accessed? Is it for querying data or for storing data?
    1. If it’s for storing data, does it put a message on the queue? If so, why does it need to query data?
    2. If it’s querying data, why would it not be allowed to query data? What is happening with the data that it’s not allowed to query it?
  2. For the future scenario, someone/something needs to be in control to start the process. Who is going to publish the signal, and why?
  3. A saga could be used to automatically publish another message (signal?) after a certain amount of time to flip the bit back.
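Item 3 could look roughly like this (a plain-Python sketch of the idea, not NServiceBus saga code; all names are hypothetical). A completion message unblocks the customer right away, and the timeout acts as a safety net so the flag never stays stuck at 1:

```python
# Sketch: flip the bit back when a completion signal arrives,
# or after a timeout as a fallback if the signal never shows up.

class FlipBackSaga:
    def __init__(self, customer_id, flags):
        self.customer_id = customer_id
        self.flags = flags
        self.completed = False
        self.flags[customer_id] = 1  # block reads when the process starts

    def on_sync_completed(self):
        # Completion signal arrived: unblock right away.
        self.completed = True
        self.flags[self.customer_id] = 0

    def on_timeout(self):
        # The requested timeout fired: if no completion signal was seen,
        # flip the bit back anyway so the customer isn't blocked forever.
        if not self.completed:
            self.flags[self.customer_id] = 0
```

In a real NServiceBus saga, the timeout would be a requested timeout message handled by the saga rather than a wall-clock check.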

I’m not familiar with the scenario, but somehow it feels like there are alternative approaches where the data can still be accessed and no flags or signals are needed.

If it’s too complex to explain, you can also email us at support@particular.net and we’ll see if we can get on a call. If you mention my name, we can also speak Dutch if you like. But we can also continue here if that suits you better.

Hi,

I can take it up with you via support if that suits you better, we have a paid version at the company.

But to answer here for the other interested people:

  1. The APIs are going to perform read queries. They are accessed by third parties, which could even be external to the company.
  2. We’re going to start the process ourselves.
  3. Less preferred.

A bit more background info:
we have two publishers. The first is a change data capture process that listens to an IBM MQ queue which receives loads of messages (every insert, update, and delete in a database). We publish these on NSB to a multitude of receivers.
The second publisher is one that starts reading from the main database (the same database whose changes the first publisher listens to) when we tell it to do so. It’ll read all data for a certain customer and publish it on the same NSB. The message types are the same for both publishers.

Bottom line: pub1 listens to changes and directly pushes them onto NSB; pub2 is put into action by a person, reads all historical data for a customer, and pushes it onto NSB.

Because we want to prevent third parties from reading incomplete data while pub2 is running, we want to block them from reading until all receivers have finished processing the messages sent by pub2.

pub2 was initially set up to send data of historical customers to interested receivers. It also became the “backup” for when pub1 might have missed something and we want to resend all the historical data for a customer to our receivers to make sure they have the best possible data available.

So far, a pub2 run for a customer takes anywhere from several minutes up to 24 hours.

So, we’re now thinking of sagas as a “super transaction” around “transactions” of potentially millions of messages. Another approach we might consider is to drop NSB for pub2, write the data directly to the interested databases, and do all the flag flipping synchronously. That would cause other headaches and goes against our decoupling strategy, but it can be done.

Kris

The “super transaction” is what I thought of as well, except that technically you likely don’t want to do this using regular SQL transactions. I’m not even sure a saga is required here. A transaction should be as short as possible and definitely not take 24 hours! :wink: Sagas are usually convenient when time is involved, for example where messages can arrive out of order, or where you need timeouts because some other endpoint might not respond in time.

Instead of blocking data, would one of the following two options be possible?

  1. Have clients read from a cache?
  2. Prepare everything in mirrored tables (or even mirrored database) somehow and have SQL Server copy the data itself?

I know SQL Server can copy data from one database to another extremely rapidly, e.g. via SqlBulkCopy, and there are other options like SSIS. But you’d need to verify which one works best.

Either way, the API and third parties don’t have to stop reading data for a while; they can just keep working. They will be working on stale data, but it depends on how stale the data is allowed to be and how strict you need to be about what they read. The moment data is queried through the API, it is already stale; it’s just stale by a few seconds. The fact that you can “catch up” on data with a manual run already kind of proves that stale data isn’t the biggest issue in the world. But it’s up to you to make decisions there.

With the above scenarios, though, the API doesn’t need to be stopped from querying data, and some task can process the data in the background without directly modifying the data that is queried by the API.

Does that help?

Didn’t come back here earlier, my bad.

I succeeded with it in a demo app where we sent the total count of expected messages as well. However, in the end we decided to go for another approach and leave the blocking of certain data in the APIs behind us, so this spike became an interesting exploration but didn’t make it into the final product.
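For reference, the counting idea from the demo can be sketched like this (plain Python rather than an actual NServiceBus saga; all names are hypothetical). The start message carries the expected total, each handled message increments a counter, and the flag flips back to 0 once the counter reaches the total:

```python
# Sketch of count-based completion: block on start, count handled
# messages, and unblock once the expected total has been reached.

class SyncTracker:
    def __init__(self, customer_id, expected_total, flags):
        self.customer_id = customer_id
        self.expected_total = expected_total
        self.handled = 0
        self.flags = flags
        self.flags[customer_id] = 1  # block API reads for this customer

    def on_message_handled(self):
        """Called once per processed message; returns True when done."""
        self.handled += 1
        if self.handled >= self.expected_total:
            # All expected messages processed: unblock and complete.
            self.flags[self.customer_id] = 0
            return True
        return False
```

In NServiceBus terms, this would roughly be a saga started by the “start sync” message, correlated on the customer id, that calls MarkAsComplete() once the counter reaches the expected total.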