IEndpointInstance + IHandleMessages + RabbitMQ = bindings

Hi,

Weird thread subject, but it’s difficult to state my question in a few words.
I started from the tutorial NServiceBus Quick Start • Particular Docs, where the solution contains multiple startup projects.
I continued with the excellent blog Default Topologies - NServiceBus with RabbitMq Part 1 — Jack Vanlightly to play with the routing topologies.

Then I started to play with my own sandbox solution, composed of only a single startup project. That means I create multiple endpoint instances and multiple message handlers in a single console application, all endpoints using conventional routing.

What a surprise to see in the RabbitMQ fanout exchanges that all the endpoint queues are bound to all the message events!
The consequence: one published event is routed to several queues, and is therefore processed multiple times by the handlers…

There is something wrong somewhere; I don’t really understand how to control that effect.
Does it come from the assembly scanning that discovers the handlers and binds them to the exchange?

I haven’t found examples where all the endpoints and handlers are located within the same assembly. Maybe I’m wrong to want to do that?
What I’m looking for is a single app (Windows service) that contains all the endpoints and all the handlers I want to develop. Is that a bad pattern?

Hi Eric,

You are absolutely right. Because of assembly scanning, each endpoint thinks it’s responsible for every handler found. You can solve this, however. We explain how in this sample about multi-hosting: Generic Host multiple endpoint hosting • NServiceBus Samples • Particular Docs
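As a rough illustration of the idea (a minimal sketch, not the sample itself; the endpoint names, handler types, and connection string are placeholders I made up, and the transport calls assume the NServiceBus.RabbitMQ 4.x/5.x API):

// Two endpoints in one process. Each excludes the other's handler types,
// so its queue is only bound to the events it actually handles.
var salesConfig = new EndpointConfiguration("Sales");
var salesTransport = salesConfig.UseTransport<RabbitMQTransport>();
salesTransport.UseConventionalRoutingTopology();
salesTransport.ConnectionString("host=localhost");
salesConfig.AssemblyScanner().ExcludeTypes(typeof(OrderBilledHandler));

var billingConfig = new EndpointConfiguration("Billing");
var billingTransport = billingConfig.UseTransport<RabbitMQTransport>();
billingTransport.UseConventionalRoutingTopology();
billingTransport.ConnectionString("host=localhost");
billingConfig.AssemblyScanner().ExcludeTypes(typeof(OrderReceivedHandler));

var sales = await Endpoint.Start(salesConfig);
var billing = await Endpoint.Start(billingConfig);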

Let me know if this helps you.

I’ve posted this into a new reply, so that you can respond to both comments individually. Is there a reason you’re hosting all endpoints inside a single host? Is it because it’s just a sandbox solution? Because there are other alternatives.

We usually recommend separating handlers over various endpoints. If you’d like to know more about the how and when, we can set up a conference call so I can explain it to you.

All these endpoints would then be hosted by their own host. An example is using Windows Services. A nice document on how to do this easily can be found here. We even make it easier for you by providing dotnet new templates.

If you’re looking for alternative hosting options, you can find them summarized here.

This troubles me. I understand that pattern in a microservices environment, but even in such an environment, that pattern has a very low maintainability level.

I’m used to working in companies where projects take a long time to come to fruition, with a lot (hundreds) of flows (and therefore queues). I can’t imagine having a separate console (or even service) application for each endpoint.
Can you imagine the workload to manage all these distinct applications (deployment, monitoring, maintenance, auditing…)?

It would be interesting to have the possibility to filter on handlers to include, instead of handlers to exclude. That was the case in a previous version (4 or 5?), wasn’t it?

that pattern has a very low maintainability level.

As always, it depends. There are tradeoffs for both.

  • Having every handler in a separate host would increase the maintenance burden across all those hosts.
  • Having all handlers in a single host would hurt throughput, as all messages arrive in a single queue.

But theoretically, you could have every single handler in a separate assembly and create a generic host that picks up the endpoint name from configuration, or derives it from the namespace the handler is in, or something like that. You would then deploy the handler only to the location of its host, restart the host, and be done with it. The only issue is that you need to configure your continuous deployment scripts to support this.
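As a sketch of what such a generic host could look like (this assumes the NServiceBus.Extensions.Hosting package; the "EndpointName" configuration key is invented for illustration, and LearningTransport stands in for the real transport):

// The host reads the endpoint name from configuration; the matching
// handler assembly is simply deployed next to the host binaries,
// where the default assembly scanning picks it up.
var host = Host.CreateDefaultBuilder(args)
    .UseNServiceBus(context =>
    {
        // e.g. appsettings.json: { "EndpointName": "Sales" }
        var endpointName = context.Configuration["EndpointName"];
        var endpointConfiguration = new EndpointConfiguration(endpointName);
        endpointConfiguration.UseTransport<LearningTransport>();
        return endpointConfiguration;
    })
    .Build();

await host.RunAsync();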

There are several reasons why you can/should combine handlers. Sometimes they share storage, sometimes they have the same SLA, the same consistency needs, etc. This depends on the context.

That was the case in a previous version (4 or 5?), wasn’t it?

Correct, we removed it in v6 for various reasons. Not all are documented. :slight_smile:

Using a single queue is not the idea: partly for throughput reasons, but also for manageability.

Indeed, I prefer to separate the messages into dedicated queues: one queue per message type. This gives a better understanding of the flows, especially when a problem occurs. When messages fall into a dead-letter queue (error queue), it is important for the operations team to know exactly what to do.

Therefore, I need to create an endpoint per message type; the endpoint of course binds to a queue, and the handler dedicated to the message type handles the message.

So, as the discussion thread states: Queue → Endpoint → Handler = a basic flow.
And: basic flow 1 + basic flow 2 + … + basic flow N = a domain flow.

The domain flow runs in one single NSB host, but is deployed to multiple servers in order to get a distributed, highly available solution.
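Concretely, I imagine one host process starting an endpoint (and thus a queue) per message type, something like this sketch (the flow names and handler types are invented for illustration; LearningTransport stands in for RabbitMQ):

// One endpoint per message type, all inside the same host process.
// ExcludeTypes keeps each endpoint from claiming the other flow's
// handler during assembly scanning (the issue discussed above).
var vehiclePositionConfig = new EndpointConfiguration("Realtime.VehiclePosition");
vehiclePositionConfig.UseTransport<LearningTransport>();
vehiclePositionConfig.AssemblyScanner().ExcludeTypes(typeof(WaitingTimeHandler));

var waitingTimeConfig = new EndpointConfiguration("Realtime.WaitingTime");
waitingTimeConfig.UseTransport<LearningTransport>();
waitingTimeConfig.AssemblyScanner().ExcludeTypes(typeof(VehiclePositionHandler));

var vehiclePosition = await Endpoint.Start(vehiclePositionConfig);
var waitingTime = await Endpoint.Start(waitingTimeConfig);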

I’ll continue in another post to give a conceptual view of what I want to achieve.

Here is the conceptual view of the flows.

I work in a transportation company, which is organized in departments called Solution Centers, one solution center for each of the company’s main domains.
Here’s an example with two easy-to-understand solution centers: Transport and Sales (obvious :slight_smile: )

In each department we’ll find some specialized domains, like Real Time, Planning, Sales, …

Each specialized domain contains several flows, for instance in Real Time flows, we’ve vehicle position flow, Waiting times flows.

Each of these flows is composed of several queues, because there are several producers and consumers that take part in the flow.

In the drawing, I show an example where the Real Time flow is composed of different endpoints.
This pattern is reproduced many times for many flow types.

My goal is to get a NSB host per domain flow (ex: Real Time Flow in one host).

Here’s a concrete example.

Green balls: Endpoint + Handler
Violet: queues

On the outside, there is the producer on the left side and two consumers on the right side.

The arrows marked “Distribution” mean that there is a separate (independent) NSB distribution process which distributes the messages to subscribers depending on the message type.

Below is another example, based on the same drawing convention, that shows how the complexity arises quickly, even for a single domain flow:


(I blurred the drawing for confidentiality reasons)

I hope I’m being clear enough to make you understand my initial question about the granularity of the NSB hosting.

Hi,

I think you’ve had your question answered by now, but if not, let me chime in quickly. I use one Windows Service for each logical functionality grouping (i.e. one per project/domain). Each Windows Service hosts N endpoints. Using routing configuration, these endpoints are configured to only process subsets of the domain messages.

I feel this gives me better logical control over each domain/project, and I can spin up other instances of the service to scale, and even skip scaling out specific endpoints if needed.

I prefer the multi-endpoint hosting as it gives me the illusion of control ;). Besides, it’s easier to explain to the Ops guys what each service is responsible for, without having to list out 20+ services.

Are you using events? For commands I fully understand, but I have trouble with events, as their routing is managed by the NSB framework, as far as I understand.

Can you maybe show me an example? It’d help me a lot in my investigations…

Disclaimer: We’re getting into the territory of domains and architecture. Technical questions like “How do I route MessageX to EndpointY” or “How do I set up logging” are much, much easier to answer. These domain and architectural questions suffer from assumptions and misinterpretations that can lead to misunderstanding. I apologize if I understand or assume something incorrectly. :slight_smile:

Now allow me to ask some questions:

  1. In the conceptual view you’re showing solution centers and a more detailed view of the real time flows under the Transport solution center. Does MOM have something to do with messaging or a service bus?
  2. Do different flows inside solution centers send messages to each other?
  3. You mention “green balls”, “violet” queues, producers and more. What I don’t understand from the concrete example: why does the Tonnage Adapter In already have two queues?
  4. What is A113 and what does SP mean?
  5. What do you mean by “distribution”? Does it mean you do data distribution? Is it just sending a message (either command or event) to the other endpoints?

I’m also not sure if I made the following terms clear.

Handler

This is the easiest one: a handler processes a message. Technically, we usually have a single handler in a single class, which would result in something like the following pseudocode:

public class OrderReceivedHandler : IHandleMessages<OrderReceived>
{
  public Task Handle(OrderReceived message, IMessageHandlerContext context)
  {
    // Some code that processes the message
    return Task.CompletedTask;
  }
}

However, we can have

  • Multiple handlers inside a single class (see the sketch below)
  • Multiple handlers inside different classes for the same message type.
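A minimal sketch of the first case, assuming a hypothetical OrderCancelled event next to OrderReceived:

public class OrderHandlers :
  IHandleMessages<OrderReceived>,
  IHandleMessages<OrderCancelled>
{
  public Task Handle(OrderReceived message, IMessageHandlerContext context)
  {
    // React to the received order
    return Task.CompletedTask;
  }

  public Task Handle(OrderCancelled message, IMessageHandlerContext context)
  {
    // React to the cancellation
    return Task.CompletedTask;
  }
}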

Endpoint

An endpoint is basically a container for one or more handlers, sagas, etc. We talk about a logical endpoint during design and development, and about a physical endpoint (or endpoint instance) once deployed. You start out with a single endpoint instance, but if you want to scale out or achieve high availability, you might have multiple instances of the same logical endpoint.

The most common scenario is hosting multiple handlers inside a single logical endpoint. A less common scenario is using a single logical endpoint for all handlers. No customer that I am aware of uses a separate logical endpoint for every single handler.

Every endpoint has its own queue. By default, the name of a logical endpoint is the name of the queue, although specific transports might adjust the physical name as they see fit. Thus, if you have multiple endpoint instances, they all read from the same queue. The only exception is MSMQ, because it technically works differently.

An endpoint is hosting agnostic.

Host

A host is something that enables an endpoint to function. We can have a Windows Service as a host, or a simple console application. We can also use Azure Service Fabric or IIS (a web application) to host our endpoints.
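For instance, the simplest host is a console application (a sketch; the endpoint name is made up and LearningTransport stands in for the real transport):

static async Task Main()
{
  // The logical endpoint name, which by default is also the queue name.
  var endpointConfiguration = new EndpointConfiguration("Sales");
  endpointConfiguration.UseTransport<LearningTransport>();

  var endpointInstance = await Endpoint.Start(endpointConfiguration);
  Console.WriteLine("Endpoint running. Press any key to stop.");
  Console.ReadKey();
  await endpointInstance.Stop();
}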

As far as I know, almost all customers use a single host per logical endpoint, not taking scaling into account. This means hosting multiple endpoints per host is rarely seen in the wild. The biggest reason is that it’s technically more complex and usually doesn’t have any benefit.

That being said, I’ve seen customers with dozens of hosts, each hosting a single logical endpoint, all of them having multiple handlers except for maybe a few. So (hypothetically) if a customer has 250 different message types (which is a large number!) and you put 10 handlers inside an endpoint, you’d end up with 25 physically deployed endpoints, assuming a single handler per message type and no scaling out. That’s not an odd scenario, apart from the fact that you might need to convince operations.

Some remarks

It was mentioned by @macdonald-k that he uses

one Windows Service for each logical functionality grouping

Does this mean you have multiple handlers inside a logical endpoint, or something else?
You mentioned “per domain”, and in my experience you can easily start out with a single logical endpoint per domain, but it’s definitely not uncommon to have several endpoints per domain. It also depends on consistency, scaling, and other requirements.

Also @macdonald-k mentioned

I can spin up other instances of the service to scale, and even skip scaling out specific endpoints if needed.

We sometimes see customers with a large number of handlers in a single endpoint, requesting information on how to scale out. Our first suggestion is to not have too many handlers inside a single endpoint. That’s the first and easiest thing to scale out.

Obviously having a single handler in every logical endpoint is theoretically the best way to achieve this, but in practice this results in a maintenance nightmare.

with events, as their routing is managed by the NSB framework, as far as I understand.

It depends on the queuing technology (transport) being used, but that’s not the view you should have of it.

You send messages from one handler to another handler. Because handlers are inside endpoints, we usually say endpoints send messages to other endpoints. But since HandlerA could send a message to HandlerB and both could be inside the same endpoint, it gets kind of fuzzy.

But since handlers are inside endpoints, and endpoints and queues have a one-to-one relationship, we still talk about it this way.
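For example, sending from inside a handler goes through the handler context (a sketch; BillOrder and the OrderId property are made up):

public class OrderReceivedHandler : IHandleMessages<OrderReceived>
{
  public Task Handle(OrderReceived message, IMessageHandlerContext context)
  {
    // Routing decides which endpoint's queue this command ends up in.
    return context.Send(new BillOrder { OrderId = message.OrderId });
  }
}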

Commands

One endpoint is responsible for, and the owner of, a command. Other endpoints send the command to the specific endpoint that owns it. So if the Sales endpoint is responsible for the AcceptOrder command, the routing looks like this:

var transport = endpointConfiguration.UseTransport<MyTransport>();

var routing = transport.Routing();
routing.RouteToEndpoint(assembly: typeof(AcceptOrder).Assembly, destination: "Sales");
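With that route in place, the sender no longer specifies a destination (a sketch, assuming endpointInstance is a started IEndpointInstance):

// The route configured above resolves the Sales queue as the destination.
await endpointInstance.Send(new AcceptOrder());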

Events

If Sales now publishes the OrderAccepted event, it just publishes it and has no idea who is subscribed.

If other endpoints are interested in this event, they register themselves as interested using this:

var transport = endpointConfiguration.UseTransport<MyTransport>();

var routing = transport.Routing();
routing.RegisterPublisher(assembly: typeof(OrderAccepted).Assembly, publisherEndpoint: "Sales");

When you’re using a transport that natively supports pub/sub, you don’t have to do the above. RabbitMQ is one of those transports.
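The publishing side looks the same either way; a sketch from inside a handler, where context is the IMessageHandlerContext:

// Subscribers (registered or native) receive this without the
// publisher knowing who they are.
await context.Publish(new OrderAccepted());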

Subsets of domain messages

If you have subsets of domain messages, you can do something like the following:

// Register single type
routing.RegisterPublisher(typeof(OrderAccepted), "Sales");
// Register all types inside an assembly
routing.RegisterPublisher(typeof(OrderAccepted).Assembly, "Sales");
// Register all types inside a namespace
routing.RegisterPublisher(typeof(OrderAccepted).Assembly, typeof(OrderAccepted).Namespace, "Sales");
routing.RegisterPublisher(typeof(OrderAccepted).Assembly, "Sales.Events", "Sales");

I hope this helps clarify things a bit. Let me know if you need more information.