Temporarily pausing message processing

NB This question has been moved here from Gitter by request:

Is there a way that an endpoint can delay messages (perhaps of a certain type) when it receives them? I want the endpoint to be able to say “Hmm, I can’t process this message right now. Let me try again in 5 minutes”. Basically it’s exactly how error retrying works but I want to trigger it manually (rather than throw an exception) and I want to be able to control the retry policy differently to error handling.

Many thanks

If you can’t process messages right now why not “hold the line” and stop the instance?

It’s not ideal to stop the whole service when it’s only one message type that can’t be processed (due to a shared resource being locked).

Also, our monitoring tools will both give us a load of alerts and try to restart the service.

None of these are problems that we can’t work around, I was just hoping that there was an easy way with a bit of code in the endpoint to simply buffer incoming messages of a given type until some point in the future.

Ok, it was not clear to me that it is only for one message type. Maybe this indicates that you are hosting too many handlers in a single endpoint.

Why can’t you just rely on error recovery? You can also use a custom retry policy if you know upfront that you want to delay longer.
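For example, a custom recoverability policy can route a specific failure into a longer delayed retry while leaving everything else on the defaults. A rough sketch (`IndexLockedException` is a hypothetical exception type your handler would throw when the resource is unavailable):

```csharp
var recoverability = endpointConfiguration.Recoverability();
recoverability.CustomPolicy((config, context) =>
{
    if (context.Exception is IndexLockedException)
    {
        // Retry this message again after a long delay instead of
        // the default immediate/delayed retry backoff.
        return RecoverabilityAction.DelayedRetry(TimeSpan.FromMinutes(5));
    }

    // Everything else falls through to the default policy.
    return DefaultRecoverabilityPolicy.Invoke(config, context);
});
```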

Stopping the endpoint does not mean exiting the process. It means tearing down the current endpoint instance and launching it again once a condition has been met. If there actually is some condition check being performed, then I would really suggest extracting the handler into its own endpoint and having something control the state of that endpoint based on the condition outcome.

If this is only happening infrequently I would just rely on the default recovery mechanism, perhaps with a custom retry policy, and consider hosting handlers for different message types in different endpoints.

Can you give a bit more detail about the shared resource being locked?

Currently there’s only one handler, but anything that I implement here I would want to limit per message type, if possible, for when other handlers are added in the future.

My scenario is this: we have a search index that is being kept up to date by an endpoint processing EntityChanged messages and pushing updates to records in the index (we call this the Feeder). However, we also need to occasionally completely rebuild the index, from scratch. This is performed by a different process, called the Builder.

The problem is that any changes that are processed by the Feeder at the same time as the rebuild process is running will be clobbered. We therefore wish to lock the particular index being rebuilt so that the relevant EntityChanged messages simply queue up until the rebuild has finished.

I have so far seen a few solutions that would work but are unsatisfactory for a few reasons:
(a) Have a second endpoint, a “controller”, for the Feeder that can receive instructions to shut down and restart the Feeder
(b) Have the Feeder check if the index is locked and throw an exception if it is, relying on a custom retry policy
(c) Have the Feeder check if the index is locked and, if it is, abort the message pipeline and send a scheduled copy of the message to itself
(d) Since we’re using SQL transport we have a lot of control over raw messages; it should be possible to implement what we want ourselves by removing the record for the message in question from the input queue and storing it elsewhere, moving it back when processing is ready to resume

a) isn’t ideal because we’d prefer not to shut down the entire endpoint. If there’s a better way than "sc.exe stop " then I’d like some details
b) feels an awful lot like using Exceptions for control flow. It would also put a lot of false errors in our logs and mess with our reporting, though this is not insurmountable
c) would work but a lot of information is lost due to the fact that the message is copied (such as getting a new message ID and, I imagine, the true originating endpoint etc)
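For reference, the sketch I had in mind for (c) uses a delayed self-send from inside the handler (`IIndexLock` and `IndexName` are hypothetical names here, standing in for our actual lock check):

```csharp
public class EntityChangedHandler : IHandleMessages<EntityChanged>
{
    readonly IIndexLock indexLock; // hypothetical lock-check abstraction

    public EntityChangedHandler(IIndexLock indexLock)
    {
        this.indexLock = indexLock;
    }

    public async Task Handle(EntityChanged message, IMessageHandlerContext context)
    {
        if (indexLock.IsLocked(message.IndexName))
        {
            // Send a copy of the message back to this endpoint with a delay.
            // Note: the copy gets a new message ID, which is the information
            // loss described above.
            var options = new SendOptions();
            options.RouteToThisEndpoint();
            options.DelayDeliveryWith(TimeSpan.FromMinutes(5));
            await context.Send(message, options);
            return;
        }

        // ...normal index update...
    }
}
```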
d) would make me feel a bit nervous, unless I hear from someone at Particular that it’s not really anything to worry about