NB This question has been moved here from Gitter by request:
Is there a way that an endpoint can delay messages (perhaps of a certain type) when it receives them? I want the endpoint to be able to say “Hmm, I can’t process this message right now. Let me try again in 5 minutes”. Basically it’s exactly how error retrying works but I want to trigger it manually (rather than throw an exception) and I want to be able to control the retry policy differently to error handling.
It’s not ideal to stop the whole service when it’s only one message type that can’t be processed (due to a shared resource being locked).
Also, our monitoring tools will both give us a load of alerts and try to restart the service.
None of these are problems that we can’t work around, I was just hoping that there was an easy way with a bit of code in the endpoint to simply buffer incoming messages of a given type until some point in the future.
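To illustrate the kind of thing I mean, here is a rough sketch using NServiceBus delayed delivery (the `EntityChanged` type, the `CanProcessRightNow` check, and the 5-minute delay are all placeholders, not real code from our system):

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus;

public class EntityChangedHandler : IHandleMessages<EntityChanged>
{
    public async Task Handle(EntityChanged message, IMessageHandlerContext context)
    {
        if (!CanProcessRightNow())
        {
            // Re-send the message to this same endpoint with a delay,
            // instead of throwing and relying on error recoverability.
            var options = new SendOptions();
            options.DelayDeliveryWith(TimeSpan.FromMinutes(5));
            options.RouteToThisEndpoint();
            await context.Send(message, options);
            return;
        }

        // ...normal processing of the message...
    }

    static bool CanProcessRightNow() => true; // placeholder condition check
}
```

One caveat with this pattern: the delayed send is a new outgoing message, so the original message ID and headers are not preserved unless they are copied across explicitly.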
Stopping the endpoint does not mean exiting the process. It means tearing down the current endpoint instance and launching it again once a condition has been met. If there actually is some condition check being performed, then I would really suggest extracting the handler into its own endpoint and having something control the state of that endpoint based on the condition outcome.
If this is only happening infrequently I would just rely on the default recovery mechanism with maybe a custom retry policy and consider hosting handlers for different messages types in different endpoints.
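A custom retry policy along those lines might look roughly like this (the `IndexLockedException` type is hypothetical; the rest uses the NServiceBus recoverability API):

```csharp
using System;
using NServiceBus;

// During endpoint setup:
var recoverability = endpointConfiguration.Recoverability();
recoverability.CustomPolicy((config, errorContext) =>
{
    // Hypothetical: back off for longer when the shared resource is locked.
    if (errorContext.Exception is IndexLockedException)
    {
        return RecoverabilityAction.DelayedRetry(TimeSpan.FromMinutes(5));
    }

    // Fall back to the default recoverability behavior for everything else.
    return DefaultRecoverabilityPolicy.Invoke(config, errorContext);
});
```

This keeps the standard retry/error-queue behavior for all other failures while giving the one problematic case its own delay.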
Can you give a bit more detail about the shared resource being locked?
Currently there’s only one handler, but anything that I implement here I would want to limit per message type–if possible–for when other handlers are added in the future.
My scenario is this: we have a search index that is being kept up to date by an endpoint processing EntityChanged messages and pushing updates to records in the index (we call this the Feeder). However, we also need to occasionally completely rebuild the index, from scratch. This is performed by a different process, called the Builder.
The problem is that any changes that are processed by the Feeder at the same time as the rebuild process is running will be clobbered. We therefore wish to lock the particular index being rebuilt so that the relevant EntityChanged messages simply queue up until the rebuild has finished.
a) isn’t ideal because we’d prefer not to shut down the entire endpoint. If there’s a better way than "sc.exe stop " then I’d like some details.
b) feels an awful lot like using Exceptions for control flow. It would also put a lot of false errors in our logs and mess with our reporting, though this is not insurmountable
c) would work, but a lot of information is lost because the message is copied (it gets a new message ID and, I imagine, loses the true originating endpoint, etc.)
d) would make me feel a bit nervous, unless I hear from someone at Particular that it’s not really anything to worry about