How to use RouteToSpecificInstance?

When you have multiple physical instances of a logical endpoint, all instances share the same message queue and load-balance the commands and events.

Is it possible, when sending a command, to specify which instance receives the message?
Let’s say most of the time it’s okay to share the load, but sometimes I want a specific server to handle it.

I tried to set

// on the receiving endpoint, with the instance id coming from configuration
endpointConfiguration.MakeInstanceUniquelyAddressable(taskManagerInstance);
and send a message with

var sendOptions = new SendOptions();
if (!string.IsNullOrEmpty(taskManagerInstance))
{
    sendOptions.SetDestination("TaskManager");
    sendOptions.RouteToSpecificInstance(taskManagerInstance);
}

But it fails saying the destination is already specified. If I don’t call SetDestination(), it fails saying the destination is not specified…

Is this function intended for what I want to do?

I’m aware that if you give each instance a different endpoint name, they both get their own unique queue and receive all events, and then I can set the specific name as the destination — but that way sharing the workload no longer works.

Hi Pier

When you specify RouteToSpecificInstance, the core enforces that you use the routing table.

So you would need to remove SetDestination and do the following in your endpoint configuration

var routing = transport.Routing();
routing.RouteToEndpoint(
    assembly: typeof(TaskManager).Assembly,
    destination: "TaskManager");
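For RouteToSpecificInstance to resolve a physical instance, each receiving instance also has to be made uniquely addressable in its own configuration. A minimal sketch — the discriminator value here is an assumption and would normally come from your deployment configuration:

```csharp
// On each physical TaskManager instance; the discriminator (e.g. a
// region name) is an assumption, supplied per deployment.
var endpointConfiguration = new EndpointConfiguration("TaskManager");
endpointConfiguration.MakeInstanceUniquelyAddressable(instanceDiscriminator);
```

With a broker transport such as RabbitMQ this gives each instance an additional instance-specific queue alongside the shared TaskManager queue, so unpinned messages keep load-balancing as before.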

and then you can change the code to

var sendOptions = new SendOptions();
if (!string.IsNullOrEmpty(taskManagerInstance))
{
    sendOptions.RouteToSpecificInstance(taskManagerInstance);
}
Hope that helps


Hi @Dunge,

I’m interested to know more about the use case. Why do you want to send a message to a specific physical instance?


Thank you Daniel, it seems to work fine.

Adam, in this case it’s a service that establishes communication with network devices (cellular modems), polls data, and then publishes the results.

We have devices everywhere on the planet, so we want to put instances of this service in multiple regions and use the nearest one to communicate with each device.

Of course I’m aware that messaging is not designed to be cross-region. Still, I have RabbitMQ set up as a cluster with nodes in multiple regions, and while I know it’s not recommended, it seems viable: I think the latency of message processing while staying inside the AWS network will be much lower than the latency of communicating from one service to the cellular network of each device.

The end goal is also to have complete high availability, with each service (not just the one polling devices) living in every region, so if one host goes down another will be ready to take over.