RabbitMQ Transport: Subscribe to events by name

Hello. I’m implementing some microservices using NServiceBus with the RabbitMQ transport.

I have two microservices:

  • MicroserviceA
  • MicroserviceB

MicroserviceA is publishing the event MicroserviceA.SomethingHappened from its own namespace.

MicroserviceB is subscribed to the event MicroserviceB.SomethingHappened from its own namespace.

Because of the difference in namespaces, MicroserviceB’s endpoint is not receiving the MicroserviceA.SomethingHappened event published from MicroserviceA’s endpoint; the event’s type is not a direct match with what MicroserviceB is subscribed to.

Using the RabbitMQ transport in NServiceBus, how can I have MicroserviceB subscribe to a “SomethingHappened” event by name rather than by type? I don’t want the subscriber to care about the event’s namespace.

Thank you.

The way to do this is to share message contracts. Specifically, the publisher of an event is the owner of the message contract used for that event. So in your case, I would expect A to define the message contract.

On top of what Adam said, that is the preferable and easiest option.

Would you mind sharing why you’re defining messages in both endpoints? There are use cases in which it might make sense, so I’d be interested in understanding more about your scenario.


Thanks for the quick response.

Message contracts are new to me; it sounds like a message contract would keep both endpoints up-to-date by sharing the event definition across endpoints/solutions.

I realize that, in my above example, I said that the MicroserviceA.SomethingHappened and MicroserviceB.SomethingHappened events were identical apart from their namespaces, but this won’t always be the case for most of my projects. Let’s say that the MicroserviceA.SomethingHappened event contains a property that isn’t found in the MicroserviceB.SomethingHappened event; maybe MicroserviceB’s handler doesn’t need this property to handle the event, but another microservice–let’s call it MicroserviceC–does. MicroserviceB.SomethingHappened and MicroserviceC.SomethingHappened contain only a subset of the properties found in MicroserviceA.SomethingHappened.

In this case, we have three events defined in separate solutions with slightly different properties. When MicroserviceA publishes its MicroserviceA.SomethingHappened event, I would want MicroserviceB and MicroserviceC to subscribe to a “SomethingHappened” event and deserialize the properties from the published MicroserviceA.SomethingHappened event into the common properties of the MicroserviceB.SomethingHappened and MicroserviceC.SomethingHappened events defined in the separate solutions. I’m hoping that I can use NServiceBus.Transport.RabbitMQ to achieve what I was able to do in RabbitMQ as a standalone:

private void DoInternalSubscription(string eventName)
{
    var containsKey = _subscriptionsManager.HasSubscriptionsForEvent(eventName);
    if (!containsKey)
    {
        if (!_persistentConnection.IsConnected)
        {
            _persistentConnection.TryConnect();
        }

        // Bind the queue to the exchange so that messages published with
        // this event name as the routing key are delivered to our queue.
        _consumerChannel.QueueBind(queue: _queueName,
                                   exchange: _exchangeName,
                                   routingKey: eventName);
    }
}

@mauroservienti I’m defining events in multiple endpoints because each microservice covers a limited scope of business logic; the event defined at the MicroserviceB endpoint will only contain the properties that MicroserviceB needs to carry out its business logic. Since MicroserviceA is the publisher of the event, MicroserviceA.SomethingHappened would encompass all of the properties needed by MicroserviceB and MicroserviceC.

I haven’t delved deep into message contracts, but at a glance, it doesn’t seem to be what I’m looking for. Correct me if I’m wrong.

@johnm I believe some of the problems you’re encountering may fall away if you were to revisit your service boundaries. I talk about that in my Finding your service boundaries talk.

For example, if you were to reconsider your service boundaries and end up with a model where you are not sharing data (only IDs) between your services, then any “internal” events (published and consumed within a service boundary) can have their contracts defined within the code repo that contains that service. In that case both publisher and subscriber can simply reference the project that defines them, and the burden of message contract sharing falls away.

As Adam points out, you want to be very careful when sharing data via events. That creates unwanted and uncontrollable coupling, because the publisher doesn’t know its subscribers.

On top of that, publishers own the contract, not subscribers. That means that it’s the publisher that decides what gets published and not the other way around. If you feel the need for a subscriber to determine what a publisher needs to publish there might be some issues with your service boundaries. Adam’s talk is an excellent resource on the topic.

As I previously said there are ways to achieve what you want. It requires manipulating the incoming raw message at the receiver side to change the type-related header so that the deserialization step deserializes the message using the receiver type and not the sender one. Take a look at the following example in our documentation: Change/Move Message Type • NServiceBus Samples • Particular Docs

The sample is designed to explain how to evolve a system when types change and not specifically for your use case. However, it can be adapted to that too.
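To illustrate the idea, that header manipulation boils down to an incoming transport message mutator. The sketch below is only my illustration, not the sample itself; the mutator class name, the type strings, and the registration call shown in the trailing comment are assumptions:

```csharp
using System.Threading.Tasks;
using NServiceBus;
using NServiceBus.MessageMutator;

// Hypothetical mutator: if the incoming message was published as the
// sender's type, rewrite the enclosed-message-types header so the
// serializer deserializes it as the receiver's own type instead.
class RemapSomethingHappened : IMutateIncomingTransportMessages
{
    public Task MutateIncoming(MutateIncomingTransportMessageContext context)
    {
        if (context.Headers.TryGetValue(Headers.EnclosedMessageTypes, out var types)
            && types.StartsWith("MicroserviceA.SomethingHappened"))
        {
            context.Headers[Headers.EnclosedMessageTypes] =
                typeof(MicroserviceB.SomethingHappened).AssemblyQualifiedName;
        }
        return Task.CompletedTask;
    }
}

// Assumed registration during endpoint configuration:
// endpointConfiguration.RegisterMessageMutator(new RemapSomethingHappened());
```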

A middle-ground solution to your specific question might be to simply standardize namespaces. If all messages were in the Messages namespace, then everything works automatically even if publishers and subscribers don’t share message types/assemblies, because serializers can tolerate differences between types. For example, a sender could define the following message:

namespace Messages
{
    public class SomethingHappened
    {
        public string SomeText { get; set; }
        public int ANumber { get; set; }
    }
}
And a receiver could define this other one:

namespace Messages
{
    public class SomethingHappened
    {
        public int ANumber { get; set; }
    }
}

Given that the namespace is the same and the receiver simply omits a property, serializers like Json.NET can deserialize the incoming message without any issue.
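To make that concrete, the receiver’s handler would reference only its local contract. A minimal sketch, assuming the shared-namespace contract above (the handler name is made up):

```csharp
using System.Threading.Tasks;
using NServiceBus;

// Subscriber-side handler for the shared-namespace contract. Properties the
// local contract omits (e.g. SomeText) are simply dropped on deserialization.
class SomethingHappenedHandler : IHandleMessages<Messages.SomethingHappened>
{
    public Task Handle(Messages.SomethingHappened message, IMessageHandlerContext context)
    {
        // Only ANumber is available here; the rest of the payload is ignored.
        return Task.CompletedTask;
    }
}
```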

Again, I’d think about service boundaries first and make sure that you really need to share what you are sharing, and carefully evaluate why there is this need of having subscribers handle different schemas before going down the path of not sharing message assemblies between endpoints.


I’m talking with my team about this, and we’re trying to figure out some fundamentals.

If we limit our events to only holding an ID, we would have to use this ID in a command to retrieve the information that we need to handle an event. Wouldn’t these commands make our event handling synchronous? I agree that coupling is an issue, but if we’re handling thousands of events, wouldn’t we want to have all of the necessary info in the events so that we can handle each of them asynchronously?

In your Amazon example, it makes sense to use commands to give the “Finance” and “Shipping” services their info–ahead of time–before the user finally presses the “Place your order” button; the user is driving this process, so synchronous calls are ok. In our case, we only want to use a single command to start a chain of events that need to be handled asynchronously. Can we do this using only IDs in our events?

Thanks again.

Is there a code sample that demonstrates how to use events with only a single ID property? I’m glad you both brought this to my attention so I can avoid coupling. This is the first time I’ve been told to do it this way.

Adding more data besides IDs and timestamps turns these messages into “fat” messages. That is a performance optimization: when receiving such a message you no longer need to query for the data, which reduces storage IO.

However, the receiver is then considered to be part of the same service boundary. Meaning, publisher and consumers are part of the same boundary.

Such an optimization is allowed, but we advise against doing it prematurely. Imagine you want to modify the storage schema: now you also need to update your message schema, which you essentially cannot do because messages could still be in flight. That means you need to allow for a live migration by adding a second message contract.
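A live migration like that usually means keeping both contract versions around for a while. A hedged sketch, with made-up event names, of what the side-by-side contracts could look like:

```csharp
using System;

// V1 contract: must keep being handled while old messages are in flight.
public class OrderShipped
{
    public Guid OrderId { get; set; }
}

// V2 contract: published going forward, after the storage schema changed.
// (Assumption: the change is additive, so V1 handlers can be retired later.)
public class OrderShippedV2
{
    public Guid OrderId { get; set; }
    public DateTime ShippedAtUtc { get; set; }
}
```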

A consumer that isn’t part of the boundary does not own the data, and thus should neither receive it nor be able to query it, as that would create coupling between different boundaries.

I’m not entirely sure what you mean by your synchronous/asynchronous comment. Fat or slim messages will not limit the amount of concurrency/parallelism. It’s only that fat messages COULD reduce query IO, but that is the only optimization.

In a scenario where, for example, a previously sequential chain of tasks has been split into isolated message-based tasks, fat messages are just fine. That seems like a system that has adopted messaging for the many benefits it brings. I would worry less about getting boundaries right up front; if you are capable of identifying coupling in messages, storage, and APIs, then at a later moment you can ask yourself whether that coupling is wanted, use the answer to define the boundaries in your system, and then plan to split or merge into better-structured service boundaries.

Can you clarify why they end up being synchronous?

Yes, I created a full sample available on GitHub at https://github.com/mauroservienti/all-our-aggregates-are-wrong-demos. Its companion talk is All our aggregates are wrong

Thanks for pushing me in the right direction.

If a thin event is consumed by a microservice whose database doesn’t already have the info that it needs to handle the event, the microservice would have to use the ID to make a request for that info from some context outside its own (bad practice, because it destroys autonomy). My team and I understood this to be synchronous because the handler would be waiting for a response before continuing with the rest of its logic.
We now understand that a microservice’s database needs to be populated with the necessary data before the thin event is published to it.
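In code, that understanding translates to a thin event plus a purely local lookup. The event, handler, and store names below are all assumptions for illustration:

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus;

// Thin event: only the identifier crosses the wire.
public class SomethingHappened : IEvent
{
    public Guid EntityId { get; set; }
}

// Hypothetical store owned by this microservice; it was populated with the
// data it owns before the event was published, so no cross-boundary call
// is needed when handling the event.
public interface ILocalStore
{
    Task<object> Get(Guid id);
}

class SomethingHappenedHandler : IHandleMessages<SomethingHappened>
{
    readonly ILocalStore store;

    public SomethingHappenedHandler(ILocalStore store) => this.store = store;

    public async Task Handle(SomethingHappened message, IMessageHandlerContext context)
    {
        // Look up only locally owned data; autonomy is preserved.
        var data = await store.Get(message.EntityId);
        // ...business logic using that local data...
    }
}
```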
From now on, we will aim to do just this, but are there situations where fat messaging is unavoidable?

That’s correct. I like to call that process ViewModel decomposition. I wrote a blog post about it in the past: The fine art of dismantling
On my blog, there is an entire series of articles about ViewModel composition: ViewModel Composition

There are plenty. A common one is within a service boundary. An HTTP API receives a request and needs to kick off a process. Doing so will send a message to a backend endpoint in the same service boundary. In this case, the two components are allowed to exchange data and use “fat messages.”

Another one is, for example, when you need to integrate with systems whose boundaries do not match the ones of the source system. In that case, you’re entering a sort of data synchronization territory in which you can use messages to exchange data.