Alternative pattern to saga with major concurrency problems

Hello,

We have a saga that is expected to receive around 200 messages when a product changes. These messages update Elasticsearch, and once all the expected n messages have arrived, the cache for the product page is cleared and the saga is completed. The number of messages left to process is stored in the saga state and decremented until it eventually reaches 0. All of these messages fly in at the same time, causing a barrage of concurrency exceptions in the RavenDB saga data.
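Roughly, the saga looks like this (names are illustrative rather than our actual code, and this assumes the NServiceBus 6/7 saga API):

```csharp
using System.Threading.Tasks;
using NServiceBus;

public class ProductChangeMessage : IMessage
{
    public string ProductId { get; set; }
    public int ExpectedCount { get; set; } // ~200
}

public class ProductCacheClearSagaData : ContainSagaData
{
    public string ProductId { get; set; }
    public int Remaining { get; set; }
}

public class ProductCacheClearSaga : Saga<ProductCacheClearSagaData>,
    IAmStartedByMessages<ProductChangeMessage>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<ProductCacheClearSagaData> mapper)
    {
        mapper.ConfigureMapping<ProductChangeMessage>(m => m.ProductId)
              .ToSaga(s => s.ProductId);
    }

    public Task Handle(ProductChangeMessage message, IMessageHandlerContext context)
    {
        if (Data.Remaining == 0)
        {
            Data.Remaining = message.ExpectedCount; // first message seeds the counter
        }

        // Every one of the ~200 concurrent messages rewrites this same saga document,
        // which is where the RavenDB concurrency exceptions come from.
        Data.Remaining--;

        if (Data.Remaining == 0)
        {
            // clear the product page cache
            MarkAsComplete();
        }

        return Task.CompletedTask;
    }
}
```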

Are there any patterns for handling this?

I know @danielmarbach mentions something similar in his NServiceBus on Azure Service Fabric video (the chocolate-business example raised by an audience member), but he doesn't offer advice on handling the issue.

Microservices with Service Fabric. Easy... or is it? - Daniel Marbach - YouTube

Navigate to the 1-hour mark in the video.

Hi,

What type of persistence are you using?

.m

We are using RavenDB persistence.

RavenDB doesn't support any kind of pessimistic locking, which, with that rate of messages, would be the easiest solution.

In your scenario I guess the best approach is to drop the saga as a way to coordinate the update and create a simple message handler (there's a code sketch after the list) that:

  • for every received message, store a very small document in RavenDB with just the product id
  • create a map/reduce index in RavenDB to count those documents, aggregated by product id
  • in the same handler, as a message is received:
      • store the aforementioned document
      • send a deferred message to yourself (say 10 seconds)
  • when the deferred message comes in, query the index (eventually consistent)
  • if the count is 200, delete all the documents, clear the cache, etc.
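A minimal sketch of that handler, assuming the NServiceBus 6+ API and the RavenDB 4.x client (the message, document, and index names are made up, and the IDocumentStore is assumed to be registered for constructor injection):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using NServiceBus;
using Raven.Client.Documents;
using Raven.Client.Documents.Indexes;

// Stand-in for whichever of the ~200 messages you receive per product change.
public class ProductElementUpdated : IMessage
{
    public string ProductId { get; set; }
}

// Tiny marker document, one per received message; never updated after creation.
public class ProductUpdateReceived
{
    public string ProductId { get; set; }
}

// Deferred "are we done yet?" message sent back to this endpoint.
public class CheckProductUpdateCompletion : IMessage
{
    public string ProductId { get; set; }
}

// Map/reduce index counting the marker documents per product.
public class ProductUpdates_Count : AbstractIndexCreationTask<ProductUpdateReceived, ProductUpdates_Count.Result>
{
    public class Result
    {
        public string ProductId { get; set; }
        public int Count { get; set; }
    }

    public ProductUpdates_Count()
    {
        Map = docs => from doc in docs
                      select new Result { ProductId = doc.ProductId, Count = 1 };

        Reduce = results => from r in results
                            group r by r.ProductId into g
                            select new Result { ProductId = g.Key, Count = g.Sum(x => x.Count) };
    }
}

public class ProductElementUpdatedHandler : IHandleMessages<ProductElementUpdated>
{
    readonly IDocumentStore store;

    public ProductElementUpdatedHandler(IDocumentStore store) => this.store = store;

    public async Task Handle(ProductElementUpdated message, IMessageHandlerContext context)
    {
        // ... update Elasticsearch here, exactly as today ...

        // Insert-only marker document: it is never modified afterwards,
        // so optimistic concurrency never kicks in.
        using (var session = store.OpenAsyncSession())
        {
            await session.StoreAsync(new ProductUpdateReceived { ProductId = message.ProductId });
            await session.SaveChangesAsync();
        }

        // Defer a completion check back to this endpoint.
        var options = new SendOptions();
        options.DelayDeliveryWith(TimeSpan.FromSeconds(10));
        options.RouteToThisEndpoint();
        await context.Send(new CheckProductUpdateCompletion { ProductId = message.ProductId }, options);
    }
}
```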

Given that you're never touching those documents, there won't be any optimistic concurrency kicking in. Obviously, there will be a natural delay due to the eventually consistent nature of the process.
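And the handler for the deferred check, reusing the message, document, and index types from the sketch above (the expected count of 200 is hard-coded just for illustration, ClearProductPageCache is a placeholder for your cache invalidation, and the exact query-extension namespaces vary a bit across RavenDB client versions):

```csharp
using System.Linq;
using System.Threading.Tasks;
using NServiceBus;
using Raven.Client.Documents;
using Raven.Client.Documents.Linq;

public class CheckProductUpdateCompletionHandler : IHandleMessages<CheckProductUpdateCompletion>
{
    const int ExpectedMessages = 200;

    readonly IDocumentStore store;

    public CheckProductUpdateCompletionHandler(IDocumentStore store) => this.store = store;

    public async Task Handle(CheckProductUpdateCompletion message, IMessageHandlerContext context)
    {
        using (var session = store.OpenAsyncSession())
        {
            // Query the map/reduce result for this product (eventually consistent).
            var result = await session
                .Query<ProductUpdates_Count.Result, ProductUpdates_Count>()
                .Where(r => r.ProductId == message.ProductId)
                .FirstOrDefaultAsync();

            if (result == null || result.Count < ExpectedMessages)
            {
                // Not all messages accounted for yet; a later deferred check will look again.
                return;
            }

            // All expected messages arrived: clear the cache and clean up the markers.
            // Clearing the cache twice because two deferred checks race is harmless here.
            await ClearProductPageCache(message.ProductId);

            var markers = await session
                .Query<ProductUpdateReceived>()
                .Where(d => d.ProductId == message.ProductId)
                .ToListAsync();

            foreach (var marker in markers)
            {
                session.Delete(marker);
            }

            await session.SaveChangesAsync();
        }
    }

    static Task ClearProductPageCache(string productId) => Task.CompletedTask; // placeholder
}
```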

.m