We have a saga that is expected to receive around 200 messages when a product changes. Each message updates Elasticsearch, and once all n expected messages have arrived, the cache for the product page is cleared and the saga is completed. The number of messages left to process is stored in the saga state and decremented until it reaches 0. Because all of these messages arrive at roughly the same time, we get a barrage of concurrency exceptions on the RavenDB saga data.
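For context, the saga looks roughly like this (a minimal sketch; the message and property names here are assumptions for illustration, not our actual code):

```csharp
using System.Threading.Tasks;
using NServiceBus;

public class ProductUpdateSagaData : ContainSagaData
{
    public string ProductId { get; set; }
    public int MessagesRemaining { get; set; }
}

public class ProductUpdateSaga :
    Saga<ProductUpdateSagaData>,
    IAmStartedByMessages<ProductChanged>,      // hypothetical message type
    IHandleMessages<ProductPartUpdated>        // hypothetical message type
{
    protected override void ConfigureHowToFindSaga(
        SagaPropertyMapper<ProductUpdateSagaData> mapper)
    {
        mapper.ConfigureMapping<ProductChanged>(m => m.ProductId)
              .ToSaga(s => s.ProductId);
        mapper.ConfigureMapping<ProductPartUpdated>(m => m.ProductId)
              .ToSaga(s => s.ProductId);
    }

    public Task Handle(ProductChanged message, IMessageHandlerContext context)
    {
        // Record how many update messages to expect (~200).
        Data.MessagesRemaining = message.ExpectedMessageCount;
        return Task.CompletedTask;
    }

    public Task Handle(ProductPartUpdated message, IMessageHandlerContext context)
    {
        // All ~200 messages arrive at once and each one mutates this same
        // saga document, which is what produces the RavenDB concurrency
        // exceptions on save.
        Data.MessagesRemaining--;
        if (Data.MessagesRemaining == 0)
        {
            // clear the product page cache here, then finish the saga
            MarkAsComplete();
        }
        return Task.CompletedTask;
    }
}
```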
Are there any patterns for handling this?
I know @danielmarbach mentions something similar in his NServiceBus video on Azure Service Fabric (the "chocolate business" scenario raised by an audience member), but he doesn't offer advice on handling the issue.
The relevant part is around the 1-hour mark in the video.