What are the future plans for the Azure Service Bus transport with regard to message polymorphism?

With the topology changes of NServiceBus.Transport.AzureServiceBus version 5, it seems that a couple of features documented in the main NServiceBus section are no longer fully supported. Specifically, polymorphic routing (https://docs.particular.net/nservicebus/messaging/dynamic-dispatch-and-routing), with or without interfaces as messages (https://docs.particular.net/nservicebus/messaging/messages-as-interfaces).

In the previous topology, these features worked out of the box with no additional configuration, because the transport created rules filtering on the NServiceBus.EnclosedMessageTypes header. In contrast, the new topology does not support these features out of the box; it requires manual configuration on the endpoint and/or on Azure Service Bus itself, and any misconfiguration might cause message loss.

What’s more, if I’m not mistaken, it is not possible to replicate the previous behaviour without seriously modifying the documented topologies. To elaborate, I’ll paraphrase a few sections of the topology documentation and explain why I find them unsatisfactory. If my reasoning contains any mistakes, please let me know.

Interface based inheritance

In the section Interface based inheritance in the Topology documentation (https://docs.particular.net/transports/azure-service-bus/topology#subscription-rule-matching-interface-based-inheritance) are the first notes on implementing these features.

Documentation says

There are a couple of suggestions on how to implement the topology for the following messages:

namespace Shipping;

interface IOrderAccepted : IEvent { }
interface IOrderStatusChanged : IEvent { }

class OrderAccepted : IOrderAccepted, IOrderStatusChanged { }
class OrderDeclined : IOrderAccepted, IOrderStatusChanged { }

An endpoint SubscriberA subscribing with IHandleMessages<IOrderStatusChanged> could be configured like so:

topology.SubscribeTo<IOrderStatusChanged>("Shipping.OrderAccepted");
topology.SubscribeTo<IOrderStatusChanged>("Shipping.OrderDeclined");

My Interpretation

This would require the endpoint SubscriberA to know about (and keep up to date!) all the different implementers of IOrderStatusChanged. In a large decentralised system, this might be hard to maintain. In our design, these interfaces cross domain boundaries and are consumed by entirely different departments. Furthermore, each consuming endpoint would need to maintain this list on its own. Missing a type in this list would mean losing those messages.

Documentation says

So instead, we can turn it around, and tell the publishing endpoint to do something different:

topology.PublishTo<OrderAccepted>("Shipping.IOrderStatusChanged");
topology.PublishTo<OrderDeclined>("Shipping.IOrderStatusChanged");

Now SubscriberA can simply use the default behaviour to subscribe to IOrderStatusChanged and everything works.

My Interpretation

With this, what would consumption for IHandleMessages<IOrderAccepted> look like? By default, this would consume the IOrderAccepted topic. But nothing is publishing to that topic.

The documentation is not very explicit about it, but my conclusion is that this suggestion only works if the handlers for IHandleMessages<IOrderAccepted> and IHandleMessages<IOrderStatusChanged> live in the same endpoint, and even then only as long as all messages implement both interfaces. A message type implementing just IOrderAccepted would need different special treatment.
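As far as we can tell, the only way to consume IOrderAccepted from a separate endpoint under this suggestion would be to map that subscription to the multiplexed topic as well, again forcing the subscriber to know the publisher’s topic layout. A hypothetical sketch using the same documented API:

```csharp
// Hypothetical: an endpoint SubscriberB handling IHandleMessages<IOrderAccepted>
// would have to point its subscription at the publisher's multiplexed topic,
// since nothing publishes to a Shipping.IOrderAccepted topic.
topology.SubscribeTo<IOrderAccepted>("Shipping.IOrderStatusChanged");
```

This restores delivery, but the publisher-side topic name has now leaked into every subscribing endpoint anyway.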

This seems like a strange requirement to us. Our understanding was that the idea behind interface-based inheritance is that you can apply specific business concerns to the same message in a mix-in fashion. Logically, each of these interfaces would be a different concern: think “main order fulfillment”, “newsletter signup”, “account creation”, and so forth. These different concerns would live in separate endpoints.

Full Multiplexing with filters

The next section with guidance on implementing a system like this is a bit further down. The section Multiplexed Derived Events repeats the previous parts, without providing new information, but Filtering within a Multiplexed Topic (https://docs.particular.net/transports/azure-service-bus/topology#subscription-rule-matching-advanced-multiplexing-strategies-filtering-within-a-multiplexed-topic) adds filtering.

Documentation says

The guidance boils down to: publish every related message to a single topic, and filter based on each endpoint’s needs to get the messages you want, preferably using a CorrelationFilter for performance reasons.

My Interpretation

I’m not overly familiar with CorrelationFilters, but based on the Microsoft docs (https://learn.microsoft.com/en-us/azure/service-bus-messaging/topic-filters#correlation-filters) it seems these matching rules only perform full string equality checks on properties. We can’t search within the NServiceBus.EnclosedMessageTypes property without using a SqlFilter. We could do that, but then we’re back to the old topology.

We might be able to come up with an alternative way of creating properties for individual enclosed message types and creating CorrelationFilters on those, but either way, this is something that a consumer of NServiceBus would have to figure out and implement themselves: both setting these values as properties on outgoing messages and, more importantly, provisioning the required filters in Azure.
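To make the idea concrete, here is a sketch of what provisioning such a filter might look like with the Azure SDK. The property name Enclosed.Shipping.IOrderStatusChanged, the topic and subscription names, and the convention of stamping one application property per enclosed type are all our own assumptions, not anything the transport provides:

```csharp
using Azure.Messaging.ServiceBus.Administration;

// Assumption: the publisher stamps one application property per enclosed
// message type on every outgoing message, e.g.
//   "Enclosed.Shipping.IOrderStatusChanged" = "1"
var admin = new ServiceBusAdministrationClient("<connection-string>");

// Correlation filters only do equality matching, so each enclosed type
// needs its own dedicated property to match on.
var rule = new CreateRuleOptions(
    "order-status-changed",
    new CorrelationRuleFilter
    {
        ApplicationProperties = { ["Enclosed.Shipping.IOrderStatusChanged"] = "1" }
    });

await admin.CreateRuleAsync("bundle", "SubscriberA", rule);
```

Functionally this might work, but it is exactly the publisher-side stamping and infrastructure provisioning that we would have to build and maintain ourselves.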

Installers

There’s also an issue with installer support. Before, we could rely on installers to provision the entire required topology without our intervention. We run the installers in a separate, more highly privileged step of our deployment pipeline. Having to manually provision certain aspects of the topology would introduce additional complexity, maintenance overhead, and room for error.

The Azure Service Bus transport configuration also does not allow us to extend the topology installation to provision these filters programmatically.

I should acknowledge that there is a currently open improvement request on GitHub (prompted by an earlier question of mine on Stack Overflow) that proposes adding installer support for correlation filters.

So, what’s the question?

The documentation suggests that these features (interface-based inheritance and routing based on inheritance) are supported by NServiceBus. Yet the new Azure Service Bus transport topology only supports them by downgrading the experience, requiring manual configuration and infrastructure changes outside installer support, without any meaningful gain: implementing a working solution results in more or less the same topology as the old single-topic topology.

We are interested to know if Particular recognises the same gap we observe between documented features and their actual support via the Azure Service Bus Transport, and if so - what are the plans to bridge this gap?

Here are some solutions we can identify, in order of descending preference:

1. Instead of phasing out the “bundle” topology - support both topologies long-term and let consumers choose their desired topology.

Inheritance is supported out of the box in the “bundle” topology, with no additional configuration or infrastructure needed. Our understanding of the Microsoft-documented limits is that, to hit them, an organisation would need an excessively high number of Service Bus entities and/or a very high traffic volume. Our own organisation is almost certainly not going to come anywhere near any of the reported limits in the foreseeable future.

We have yet to encounter any noticeable performance or latency issues associated with the SQL subscription rules. Granted, our organisation does not yet have a massive number of subscriptions or filters. But even so, since messaging systems are geared more towards eventual-consistency guarantees than towards speed of execution, we question whether anything but the most complex systems would see unacceptable performance or latency at the overall system level.

We believe it would be highly desirable for many organisations of varying sizes, ours included, to keep this option available long-term.

2. Introduce an additional / alternative topic-per-event topology which provides the inheritance feature out of the box.

As we understand from the docs, the topology change was introduced to satisfy additional requirements and provide benefits over the “bundle” topology: (A) reduce filtering overhead, boosting performance and scalability; (B) mitigate the risk of hitting Azure Service Bus limits; (C) reduce the failure-domain size when a topic size quota is hit. The result is the current topology, which consists of a single topic per most-concrete event type.

However, this is not the only topology that can satisfy those requirements. Trying to solve the puzzle of satisfying them while “natively” supporting inheritance, we have experimented with a few alternative topologies, some of which we concluded were quite adequate. If Particular is interested, we are more than willing to share our ideas.

If a suitable topology candidate is found, Particular may wish to consider here as well whether to provide the new topology as an evolution of the current one (aiming to replace it in a future version), or as a configurable option allowing organisations to favour either native inheritance support or topology simplicity.

3. Enhance extensibility points, and let consumers implement their own topology.

Currently, it is not possible to extend the TopicTopology class due to its internal abstract methods. While a lot of topology setup could be performed by creating a Feature that modifies the topology configuration, installer support is limited because the transport keeps the ServiceBusManagementClient internal.

By opening up the topology types for additional extensibility, Particular could let consumers implement their own home-brew solutions to such problems.

While this solution could work (and may be a nice improvement regardless), we naturally consider it the least desirable. It allows organisations to add inheritance support, but it does not provide it out of the box, and it would signal that Particular is willing to drop native support for this feature in the Azure Service Bus transport going forward. We hope Particular is willing to invest the effort to re-incorporate support for this feature into this prominent transport. We very much agree with the benefits of resilience, extensibility and maintainability as documented for dynamic dispatch and polymorphic routing, and all in all we find these features a pillar differentiating NServiceBus from competing offerings.

4. Another solution we haven’t considered?

🙂

Hi,

Thanks for taking the time to write this up. There are some good observations in there. I’ll try to clarify where the behavior differs and why.

Polymorphism and the topology change

You are right that things behave differently compared to the single-topic topology.

Previously, polymorphic routing worked out of the box because everything went through one topic and SQL filters evaluated the NServiceBus.EnclosedMessageTypes header. That effectively pushed inheritance handling into the broker.
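For reference, the rules the old topology provisioned were SQL filters roughly of this shape (paraphrased; the exact expression may differ):

```sql
-- One rule per subscribed event type, substring-matching the header
[NServiceBus.EnclosedMessageTypes] LIKE '%Shipping.IOrderStatusChanged%'
```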

With topic-per-event, that implicit behavior is gone. Each concrete event is published to its own topic, and the broker has no understanding of CLR type hierarchies. Because of that, inheritance needs to be expressed explicitly somewhere, whether that is on the publisher, subscriber, or via infrastructure.

That is not so much a removal of the capability, but a shift in where that responsibility lives.

What still works

One important distinction: polymorphism on the handling side still works exactly the same.

If a message reaches an endpoint and can be deserialized, handlers like IHandleMessages<IOrderStatusChanged> will still be invoked based on the enclosed message types. That part has not changed.

The difference is purely in routing, not in dispatch.

What changes in practice

Instead of a single implicit mechanism, you now have a few different ways to model this depending on your constraints:

  • explicit mapping on the publisher or subscriber side
  • grouping related events into shared topics where it makes sense
  • forwarding into abstraction topics for stable contracts
  • or more customized topology approaches for specific scenarios

Each of these comes with different tradeoffs around complexity, isolation, and operational control.

So rather than one default that tries to cover everything, the system gives you more flexibility in how you shape those tradeoffs, while the default follows the practice of a single topic per event, which scales well.

Why we moved away from the bundle topology

The bundle topology made things convenient, but it came with tradeoffs that show up as systems grow:

  • SQL filters in the hot path
  • growing numbers of rules and subscriptions
  • a shared topic becoming a central pressure point

In our intensive testing with a broad set of different loads and volumes, this showed a fairly clear pattern: performance remains fine for a while, then degrades non-linearly once certain thresholds are crossed. A single topic effectively behaves like a shared log structure, and once it is under enough pressure, it becomes a bottleneck for the whole system.

The topic-per-event approach avoids that by distributing load and isolating failure domains, at the cost of making some routing decisions explicit.

On your concerns

A few quick clarifications:

  • Needing to know concrete types is not always required. That depends on where you place the mapping. Publisher-side multiplexing or forwarding can centralize that concern if needed.
  • Using filters is still possible, but it is a conscious move back toward grouped topics, with the same tradeoffs as before.
  • Installer support today focuses on the core topology. More advanced setups like filters or forwarding are typically handled via deployment or IaC.

On limits

You mentioned that hitting Azure Service Bus limits would require very large scale.

That is often true initially. The challenge is that the degradation is gradual and then suddenly very noticeable. The design here is less about absolute limits and more about avoiding those “fall over” characteristics as systems evolve, because they always do, and by the time problems surface the original context is no longer in people’s minds.

On topology options

Providing multiple topologies or keeping the bundle topology long-term is something we have looked at. The difficulty is that every additional option increases the surface area for support, documentation, and long-term maintenance.

That does not mean it is off the table, but it is not a free choice either.

FYI, I have recently answered a similar question on Stack Overflow.

“we have experimented with a few alternative topologies, some of which were concluded to be quite adequate. If Particular is interested, we are more than willing to share our ideas.”

Happy to listen and learn! If you want to go deeper, feel free to reach out and we can set up a call: daniel dot marbach at particular dot net.

Best regards,
Daniel

First of all thanks for the fast response.

Quick question before I respond to the rest of the post. Do you know how much of the performance degradation of the old single-bundle topology was due to the SQL filters specifically, and how much was due to filtering in general?

I might have a few ideas on implementing the same filters as the old topology using Correlation filters, and was wondering whether that idea is worth pursuing at all?

Hi,

From what we observed, the main issue was not just SQL filters specifically, but the combination of a shared topic and per-message evaluation across multiple subscriptions.

SQL filters do add overhead, but they are not the whole story. The more fundamental scaling characteristic comes from the shared topic itself acting as a central log that all messages pass through.

What we saw in practice is that this scales fine up to a point, but not linearly. As the number of subscriptions and rules grows, degradation becomes noticeable earlier than one might expect, even at relatively modest scales.

After that, latency and throughput start to degrade in a way that is difficult to predict and worsens quickly as load increases. In those situations, the topic can build up backlog and eventually run into quota limits, which in turn impacts all publishers since the shared topic becomes a central point of pressure.

Regarding correlation filters:

It is true that Azure Service Bus allows significantly more correlation filters than SQL filters. However, that does not translate into a proportional performance improvement in practice.

Correlation filters are more efficient and can reduce CPU and memory overhead. However, they do not change the underlying model of a shared topic with multiple subscriptions evaluating each message.

In practice, the bottleneck we observed was largely driven by the IO and throughput characteristics of that shared topic under load. Switching from SQL to correlation filters can delay when issues appear, but it does not remove the non-linear scaling behavior.

So whether it is worth pursuing depends on your goals:

  • If you are optimizing for simplicity and expect to stay within comfortable limits, it can be a reasonable tradeoff.
  • If the goal is to avoid those scaling characteristics altogether, it is usually better to move away from the shared-topic pattern.

Happy to take a look at your idea if you want to sketch it out.

For context, we have explored a number of similar approaches, including “splatting out” the enclosed message types into individual broker properties so they can be matched using correlation filters.

That can work functionally, but in practice it still ends up with a shared topic and multiple subscriptions evaluating each message. While correlation filters help reduce some overhead compared to SQL filters, we did not see it fundamentally change the scaling characteristics. It mainly shifts where the cost shows up rather than removing it.

FYI, some of the harness we used is available here: https://github.com/danielmarbach/AzureServiceBusTopologyComparison. It includes Helm charts that allow you to simulate different scales.

We have spent quite some time exploring this space, including experiments along these lines and discussions with Microsoft. Based on that, these approaches tend to be fairly time-consuming and, in our experience, do not fundamentally change the scaling characteristics.

That said, systems and constraints evolve, and we are always open to new insights. If you do explore this further and find something interesting, we would be pleased to hear about it.

Best regards,
Daniel

This was more or less the direction I was thinking of for correlation filters specifically. I guess that wouldn’t really be a direction worth taking, then.


I think it would be valuable to have a call about some of the other ideas we have, both to see if there is anything Particular can do to support them, and to see if we missed anything that might cause problems down the line. A colleague of mine will reach out by mail to try and schedule something.