ASP.NET Command API Controllers - to queue or not to queue commands?

We have a web-based platform where users enter quite a bit of transactional data. I was curious what others are doing with their ASP.NET controllers: do you process the command in the controller and then use NServiceBus to raise events, or do you drop the command straight into the queue and let NServiceBus handle it?

We currently queue the command, handle it with NServiceBus, and then use SignalR to update the front end.

But I think we could deliver a better user experience if we processed the command synchronously (in most situations) and queued events from there.

I think there are some scenarios (typically anything involving a process manager) where queueing the commands makes sense, but that would be the exception, not the rule.

I was thinking our command API controllers would essentially give one of the following responses:

  1. Successful
  2. Unsuccessful (various flavours here)
  3. Queued

That would give the front end a convention it could handle appropriately.
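A minimal sketch of what that convention might look like as a controller. All names here (`ProcessOrderCommand`, `RequiresProcessManager`, the `CommandResult` shape) are hypothetical, not from our codebase; the only real APIs assumed are MediatR's `IMediator` and NServiceBus's `IMessageSession`:

```csharp
// Hypothetical sketch: process most commands synchronously via MediatR,
// fall back to queuing via NServiceBus, and return one of three outcomes.
public enum CommandOutcome { Successful, Unsuccessful, Queued }

public record CommandResult(CommandOutcome Outcome, string? Error = null);

[ApiController]
[Route("api/orders")]
public class OrdersController : ControllerBase
{
    private readonly IMediator _mediator;             // MediatR
    private readonly IMessageSession _messageSession; // NServiceBus

    public OrdersController(IMediator mediator, IMessageSession messageSession)
    {
        _mediator = mediator;
        _messageSession = messageSession;
    }

    [HttpPost]
    public async Task<ActionResult<CommandResult>> Post(ProcessOrderCommand command)
    {
        // Assumption: something (a flag, routing metadata) marks the commands
        // that need a process manager and therefore must be queued.
        if (command.RequiresProcessManager)
        {
            await _messageSession.Send(command);
            return Accepted(new CommandResult(CommandOutcome.Queued));
        }

        var result = await _mediator.Send(command); // handled synchronously
        return result.Succeeded
            ? Ok(new CommandResult(CommandOutcome.Successful))
            : UnprocessableEntity(new CommandResult(CommandOutcome.Unsuccessful, result.Error));
    }
}
```

Using HTTP status codes alongside the enum (200/422/202) means even non-SignalR clients can distinguish the three outcomes.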

The challenge becomes: what if it fails? The most concerning part is that we use in-memory domain event handlers and have logic to handle retries. My assumption would be to use MediatR and add retry behaviour to its pipeline.
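A sketch of what that retry behaviour could look like as a MediatR pipeline behavior (the inline loop, attempt count, and backoff are illustrative; in practice you might delegate the retry policy to something like Polly):

```csharp
// Sketch: a MediatR IPipelineBehavior that retries a failed handler a few
// times with a simple linear backoff before letting the exception escape.
public class RetryBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
    where TRequest : notnull
{
    private const int MaxAttempts = 3; // assumption: tune for your workload

    public async Task<TResponse> Handle(
        TRequest request,
        RequestHandlerDelegate<TResponse> next,
        CancellationToken cancellationToken)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return await next();
            }
            catch (Exception) when (attempt < MaxAttempts)
            {
                // Back off briefly before retrying the handler.
                await Task.Delay(TimeSpan.FromMilliseconds(100 * attempt), cancellationToken);
            }
        }
    }
}

// Registered as an open generic so it wraps every command:
// services.AddTransient(typeof(IPipelineBehavior<,>), typeof(RetryBehavior<,>));
```

One caveat: retrying in-process only helps with transient failures; anything that outlives the request still needs the queue.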

All of this sounds good but I feel like it might make for a cumbersome dev experience or I might be over complicating it…

Curious what others are doing?

I was having similar thoughts before, especially for all the small data changes a user makes, e.g. minor CRUD operations.

We currently only send commands from the controller or, in our newest part, which is Blazor Server, directly from the Blazor code (the equivalent of the controller). We add a GUID JobId that the client generates to all of these commands, and then notify finished job IDs via NServiceBus events and SignalR to get them back to the client.
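That JobId round-trip could be sketched roughly like this (the message and hub names are illustrative; the real APIs assumed are NServiceBus's `IHandleMessages<T>` and SignalR's `IHubContext<T>`):

```csharp
// Sketch: client generates a JobId GUID, attaches it to the command, and a
// web-side NServiceBus handler relays the "finished" event back via SignalR.
public interface IClientTrackedCommand : ICommand // NServiceBus command marker
{
    Guid JobId { get; }
}

public class JobFinished : IEvent                 // NServiceBus event marker
{
    public Guid JobId { get; set; }
}

public class JobsHub : Hub { }

// Runs on the web endpoint, where it has access to the SignalR hub context.
public class JobFinishedHandler : IHandleMessages<JobFinished>
{
    private readonly IHubContext<JobsHub> _hub;

    public JobFinishedHandler(IHubContext<JobsHub> hub) => _hub = hub;

    public Task Handle(JobFinished message, IMessageHandlerContext context) =>
        _hub.Clients.All.SendAsync("jobFinished", message.JobId);
}
```

In a real system you would scope the push to the originating client or group rather than `Clients.All`; this just shows the relay.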

For Blazor we also have context grouping via a GUID. This lets us update all users who are currently on the same screen. In that case the event pushed back contains more data, so we can update the UIs to the current state without fetching from the database at all. It makes those UIs a little more complex, but it makes for a really nice user experience.

I think as soon as web servers are load balanced (which isn't necessary in our system yet; we have a hot standby in case of a crash) it gets a bit more complicated: for SignalR you'll need to use sticky sessions, but you'll also need to find a way for the correct physical endpoint to process the event from NServiceBus. I haven't gotten into this yet, but eventually I'll have to deal with it.
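I haven't run this either, but for what it's worth, one documented way to sidestep the "which node holds the connection" problem on the SignalR side is a Redis backplane, so any web node can push to clients connected anywhere:

```csharp
// Sketch: SignalR scale-out via the Redis backplane package
// (Microsoft.AspNetCore.SignalR.StackExchangeRedis). The connection string
// is a placeholder.
builder.Services.AddSignalR()
    .AddStackExchangeRedis("redis-connection-string");
```

That still leaves the NServiceBus half of the problem (which endpoint instance handles the event), but with a backplane it no longer matters which node the client is stuck to.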

Exactly. For simple operations, it feels like the optimal approach is to handle them immediately and then leverage NServiceBus for any async work, such as raising integration events. Like I said, we do have more complex processes where we would still want to queue.

I think it's worth a PoC, as we're going to have to support external APIs for other clients where we can't leverage the same SignalR feedback loop.

I haven't really seen anyone else take this approach, so I'm questioning the validity as well as the ROI.

The route both of you have mentioned is the route we've taken as well. We originally started by always queuing commands for any data-change operation, but that was a ton of overhead without much benefit. Now we're more strategic about when we actually need to queue a command versus just handling the update within the API controller.