Currently there’s only one handler, but anything I implement here I would want to be able to limit per message type, if possible, for when other handlers are added in the future.
My scenario is this: we have a search index that is kept up to date by an endpoint processing EntityChanged messages and pushing updates to records in the index (we call this the Feeder). However, we also occasionally need to rebuild the index completely from scratch. This is performed by a different process, called the Builder.
The problem is that any changes that are processed by the Feeder at the same time as the rebuild process is running will be clobbered. We therefore wish to lock the particular index being rebuilt so that the relevant EntityChanged messages simply queue up until the rebuild has finished.
I have so far seen a few solutions that would work but are unsatisfactory for a few reasons:
(a) Have a second endpoint, a “controller”, for the Feeder that can receive instructions to shut down and restart the Feeder
(b) Have the Feeder check if the index is locked and throw an exception if it is, relying on a custom retry policy
(c) Have the Feeder check if the index is locked and, if it is, abort the message pipeline and send a scheduled copy of the message to itself
(d) Since we’re using the SQL transport, we have a lot of control over raw messages; it should be possible to implement what we want ourselves by removing the record for the message in question from the input queue and storing it elsewhere, then moving it back when processing is ready to resume
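To make the intent of options (c) and (d) concrete, here is a minimal, framework-agnostic sketch of the check-lock-and-defer pattern. Everything in it (`IndexLock`, `deferred`, `handle_entity_changed`) is a hypothetical stand-in, not the actual bus API: in the real system the lock flag would live wherever the Builder records a rebuild in progress, and "deferring" would be the transport's scheduled-delivery (or queue-table move) mechanism rather than an in-memory list:

```python
class IndexLock:
    """Stand-in for the per-index rebuild lock set by the Builder."""

    def __init__(self):
        self._locked = False

    def lock(self):
        self._locked = True

    def unlock(self):
        self._locked = False

    @property
    def locked(self):
        return self._locked


def handle_entity_changed(message, index_lock, deferred, index):
    """Option (c) in miniature: if the index is locked, abort
    processing and re-queue the message instead of updating."""
    if index_lock.locked:
        # The real bus would schedule a delayed copy of the message
        # back to this endpoint's input queue here.
        deferred.append(message)
        return "deferred"
    index[message["id"]] = message["payload"]
    return "processed"


# Usage sketch: messages arriving during a rebuild queue up,
# then are replayed once the lock is released.
index, lock, deferred = {}, IndexLock(), []

lock.lock()  # Builder starts a rebuild
handle_entity_changed({"id": 1, "payload": "a"}, lock, deferred, index)

lock.unlock()  # rebuild finished
for m in list(deferred):
    deferred.remove(m)
    handle_entity_changed(m, lock, deferred, index)
```

The design point this illustrates is that the Feeder never writes while the lock is held, and no message is lost: each deferred message is simply processed later, which is exactly the "queue up until the rebuild has finished" behaviour I'm after.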