Additional Context Around Best Practices

In the NSB best practices (Best practices • NServiceBus • Particular Docs), there's mention of two key points: 1) limit the number of handlers per endpoint, and 2) group handlers by SLA.

While I get the statements and have seen the sample on how to do it (Generic Host multiple endpoint hosting • NServiceBus Samples • Particular Docs), I'm looking for a little guidance on how others are slicing their endpoints.

For a web-based API, I'm imagining a typical implementation is to have an endpoint for user-triggered handlers and a separate endpoint for background processing, with a project structure something like:

  • Web Api project (that registers and hosts the endpoints)
  • User-focused Endpoint project
  • Background Worker Endpoint project
  • Domain project (that both the endpoints use)

Or is it typical to go more fine-grained? What kind of heuristics are used to guide the slicing? If the API consisted of multiple "features", would you slice endpoints by feature? Selective feature isolation?

I appreciate any guidance around this, and I get that, of course, "it depends".

Hey @Craig_Cox,

Before I really get into it, I don't understand how Generic Host multiple endpoint hosting • NServiceBus Samples • Particular Docs is related to this, and I generally wouldn't recommend that. Keep the endpoints in separate processes; there's usually not much to gain (and a lot of complexity involved) in trying to smash them together into a single process boundary.

Here are a few other things to think about:

  1. A logical endpoint is your unit of scale. So anything that has different scalability requirements beyond the SLA is something to consider moving to a separate endpoint. I once was building a system that did push notifications to millions of devices. That definitely got its own endpoint.
  2. A logical endpoint is the thing that subscribes to events. (Handlers don't subscribe to events; endpoints do.) So if you want multiple copies of a published event, each processed individually, the handlers need to live in separate endpoints.
  3. You could also slice by feature. It can be helpful especially when a system grows large enough that you have endpoints in multiple repos/solutions that are exchanging event contracts via NuGet package rather than sharing source directly. This is much more of an “It Depends” and “Your Mileage May Vary” situation so you don’t have to do it, but if it makes the system easier to reason about, then do it.

That help?

Appreciate the response, David! I was on the same page as you… I hadn't articulated what I was interested in correctly, and so when I proceeded to elaborate, I ended up answering my own question.

My focus was on user-triggered messages that require a fast response getting placed at the end of a queue after a flood of system-to-system messages (which have a more relaxed response-time requirement) was received… If those concerns rely on the same shared domain model, how would you split one service into two?

The concern was more about needing to throttle the system-to-system messages. Thinking about the challenge in those terms, it becomes straightforward for us to throttle the "gateway" service, which would ensure the downstream endpoints aren't flooded.
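For what it's worth, that kind of throttling maps directly onto NServiceBus's built-in concurrency limit. A minimal sketch, assuming a hypothetical "Integration.Gateway" endpoint (the name, the transport, and the limit of 4 are made up and would need tuning):

```csharp
using NServiceBus;

// "Gateway" endpoint that absorbs the system-to-system traffic.
// Capping processing concurrency keeps a burst of integration
// messages from flooding the downstream endpoints.
var gateway = new EndpointConfiguration("Integration.Gateway");
gateway.UseTransport(new LearningTransport());
gateway.LimitMessageProcessingConcurrencyTo(4); // assumption: tune via load testing
await Endpoint.Start(gateway);
```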

Again thank you for your answer

I think I maybe understand a bit more. You were thinking about having a WebAPI where the two endpoints and the domain model code were all hosted in the same single process boundary.

If that's accurate, then what I'm saying is: don't do that.

Instead:

  1. A WebAPI project that hosts a send-only endpoint. All it does is take in HTTP requests and send messages to backend processes.
  2. A service for the User-focused (quick SLA) messages, hosted in a 2nd process.
  3. A service for the background messages (longer SLA), hosted in a 3rd process.
  4. A domain project containing code that all 3 processes can share.
  5. A messages project containing all the message definitions, that all 3 processes can share.
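Item 1 might look roughly like this in an ASP.NET Core minimal API. This is a sketch only: `PlaceOrder`, the endpoint names, and the LearningTransport are assumptions. The `SendOnly()` call is what guarantees the web process hosts no handlers:

```csharp
using NServiceBus;

var builder = WebApplication.CreateBuilder(args);

builder.Host.UseNServiceBus(context =>
{
    var endpointConfiguration = new EndpointConfiguration("WebApi");
    endpointConfiguration.SendOnly(); // no handlers hosted here
    var routing = endpointConfiguration.UseTransport(new LearningTransport());
    // Commands go to the user-focused (quick SLA) endpoint:
    routing.RouteToEndpoint(typeof(PlaceOrder), "Sales.UserFacing");
    return endpointConfiguration;
});

var app = builder.Build();

// The injected IMessageSession is the only messaging surface the web app needs.
app.MapPost("/orders", async (IMessageSession session) =>
{
    await session.Send(new PlaceOrder());
    return Results.Accepted();
});

app.Run();

public record PlaceOrder : ICommand;
```

Keeping the web tier send-only also means it can be scaled out purely on HTTP load, independently of the message-processing endpoints.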

That makes sense

For 1 and 2, if there's a 1-to-1 mapping (meaning 3 only receives background messages from other services), what are your criteria for having 1 and 2 together vs. separate? Or would you always have the Web API be a send-only endpoint and not have it host handlers as well? My head goes to Web APIs that aren't high-throughput, where the overhead of each having its own resources might be better pooled than separately resourced. Trying to make sure I understand the trade-offs.

I would always have the web app only send messages, so that the web app can be scaled out independently.
