Determining which handlers comprise an Autonomous Component?

I understand from Udi’s course about Autonomous Components that they should be fairly granular in terms of the handlers they contain, preferably aligned to a single use case. I’m trying to determine whether this means only one handler per AC, or whether there are exceptions to the rule.

Let’s say we have a use case for the Sales service called ManageCartItems that implies adding and removing items from a cart. We’ll assume we don’t have Add and Remove as individual use cases, as being able to add without being able to remove would be of little use to the business.

I see 3 possible development structures that could be used:

  1. One AC called Sales.ManageCartItems, containing handlers for AddItemToCart and RemoveItemFromCart.
  2. Two ACs, one called Sales.ManageCartItems.AddItemToCart and the other Sales.ManageCartItems.RemoveItemFromCart. Each contains only their respective single handler.
  3. Two ACs, one called Sales.AddItemToCart and the other Sales.RemoveItemFromCart. Each contains only their respective single handler.

I would think that adding and removing items from a cart would use the same functionality to achieve their goals, and the same data structure too. So any requirements change for managing items would likely affect both handlers, or the code both consume.

With this in mind, is option 1 an acceptable approach?

I guess I should also ask, is this style of naming ACs even correct? It seemed logical to have the use case name somewhere.

Hi Mark,

I personally like to think in terms of Policies - in the extreme case, for everything, even with only one handler :slight_smile: - so your use case could be something like

ManageCartItemsPolicy

with two handlers

AddItemToCart and RemoveItemFromCart.

This is the same idea as your first option. If you have a logical Policy operating on the same data, it simply maps to a physical Saga implementation. Then, conceptually:

Policy == Saga == Aggregate == AC -> Component that could be developed/deployed autonomously
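As a sketch of that mapping (a minimal illustration only, assuming NServiceBus; the message contracts, OrderId/ItemId properties, and class names are all assumptions, not an established design):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using NServiceBus;

// Illustrative message contracts (names assumed, not from the thread)
public class AddItemToCart : ICommand { public Guid OrderId { get; set; } public Guid ItemId { get; set; } }
public class RemoveItemFromCart : ICommand { public Guid OrderId { get; set; } public Guid ItemId { get; set; } }

// ManageCartItemsPolicy as a saga: one AC, two handlers, one shared state
public class ManageCartItemsPolicy :
    Saga<ManageCartItemsPolicy.State>,
    IAmStartedByMessages<AddItemToCart>,
    IHandleMessages<RemoveItemFromCart>
{
    public class State : ContainSagaData
    {
        public Guid OrderId { get; set; }
        public List<Guid> ItemIds { get; set; } = new List<Guid>();
    }

    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<State> mapper)
    {
        // Correlate both message types to the saga instance by OrderId
        mapper.ConfigureMapping<AddItemToCart>(m => m.OrderId).ToSaga(s => s.OrderId);
        mapper.ConfigureMapping<RemoveItemFromCart>(m => m.OrderId).ToSaga(s => s.OrderId);
    }

    public Task Handle(AddItemToCart message, IMessageHandlerContext context)
    {
        Data.ItemIds.Add(message.ItemId);
        return Task.CompletedTask;
    }

    public Task Handle(RemoveItemFromCart message, IMessageHandlerContext context)
    {
        Data.ItemIds.Remove(message.ItemId);
        return Task.CompletedTask;
    }
}
```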

Hope this helps.
Mike

Hi Mike,

Thanks for that input. I thought it felt like a saga myself. However, if I may, I have some related questions about the business state these handlers (or saga) consume and update.

I’ll set a basic scene.

I have a Sales Order Aggregate that maintains its OrderItems, each of which has an ItemId. The Order has an OrderId. There would be other related “Ids” and constraints but I’ll leave those out for this example.

The handlers for adding and removing items load the aggregate and call the appropriate methods to add and remove items from its internal state. This state is persisted to the service’s database, i.e. it is not specifically owned by the autonomous component.

I’d use another view-model component (same business service) that queries the aggregate state to obtain the ItemIds that comprise the order, then use server-side composition to obtain related data from other services.

Perhaps there are also ITOPS functions that aggregate data from other services in the context of the OrderId and ItemId relationship too. Billing, Fulfillment, etc.

At a fundamental level though, this service’s OrderId and ItemId relationship is available to “all” ACs in the service.

Now the question.

I had an AC with 2 handlers that referenced a shared (within the service) domain/data library. Now let’s use a Saga that encapsulates the Order aggregate’s behaviour and state.

My understanding of sagas is:

  1. that they should not obtain business state from outside the saga, and
  2. you cannot query for a saga’s business state from code running outside the saga, i.e. another AC.

This tells me that I should include a list of ItemIds in the saga state rather than obtain them via some injected repository. That’s possible and fits with point 1.

However, I am at a loss for how to support another AC (a view model composition component, for example) obtaining the ItemIds for a given order. How do we solve this when shifting to a saga implementation?

Hi Mark,

If ManageCartItemsPolicy has only the simple add/remove functionality, then a Saga as the technical solution is probably overkill, especially since the data in the Cart is private to the user, so you could use the same data store (model) for both writes and reads. If, on the other hand, ManageCartItemsPolicy has some complicated business rules, for example:

  • when the user adds an item to the Cart and, before the order is accepted, the item’s price changes, show this information in the UI
  • when the user adds an item to the Cart and, before the order is accepted, the item becomes unavailable, show this information in the UI

then the Saga can subscribe to events, do some other things, save its own internal state, and update a separate read store (model) for view composition.

Using a Saga whose data must also be used for queries always requires a separate read store (model) and synchronization between the two.
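For instance, a minimal sketch of that synchronization side, assuming NServiceBus. Both ItemPriceChanged and ICartReadStore are invented names for illustration; the real event and store would come from your service:

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus;

// Assumed event published by the saga (internal contract)
public class ItemPriceChanged : IEvent
{
    public Guid OrderId { get; set; }
    public Guid ItemId { get; set; }
    public decimal NewPrice { get; set; }
}

// Invented abstraction over the separate read store
public interface ICartReadStore
{
    Task UpdatePrice(Guid orderId, Guid itemId, decimal newPrice);
}

// Denormalizer: projects the saga's events into the read model.
// The read model is eventually consistent with the saga state.
public class CartViewUpdater : IHandleMessages<ItemPriceChanged>
{
    readonly ICartReadStore readStore;

    public CartViewUpdater(ICartReadStore readStore) => this.readStore = readStore;

    public Task Handle(ItemPriceChanged message, IMessageHandlerContext context) =>
        readStore.UpdatePrice(message.OrderId, message.ItemId, message.NewPrice);
}
```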

It’s best to think in terms of the Saga concept even when the final implementation doesn’t require a Saga. When things start to get more complicated, it’s then easier to migrate from your own solution to a Saga-based one.

Hope this helps.
Mike

I guess I’m struggling to let go of the one-domain-to-rule-them-all approach when it comes to behaviour. :slight_smile:

Assuming sagas are the go-to for when there’s some business constraint complexity, is the read model a singular concept for the whole service, or are there specific owned read models per AC in the service for those that have a requirement for a read model? For example, does a viewmodel composition AC have one model that it owns and an AC that supplies Billing info to an ITOPS service also have its own model?

With one single read model there would be one read model (lib) per business service, and ACs with read requirements would make use of it as needed, noting that a change to the read model potentially impacts the deployment of all those ACs. It also means calculation functions are kept in one place.

Separate read models, one per AC, mean those ACs are mostly autonomous, except where they share some calculation functionality that needs to change.

As for the AC containing the saga, I assume the synchronization is eventual? The saga publishes events, and some other deployed “thing” updates the read model. Are these “things” handlers within the saga’s AC, or a separate ReadModel (poor name) AC?

A Service is a logical concept; an AC is a physical solution, and an AC belongs to a Service. If an AC uses a read model, then that read model’s schema belongs to that AC and Service. Not every AC in the same Service has to use a read model as a physical solution. Unfortunately there is no single best-practice answer for how it should be done; everything depends on the functional context.

The same goes for read model distribution. For example, if there is only one system with one database, the read models could be separated by database schema while physically sharing the same database.

On the other hand, if there are two systems, each with its own database, the read model could be stored in each of them, but logically the data still belongs to the same Service.

Yet another solution is to have an API per Service/AC backed by the Service’s own database and not distribute data at all, but then if the API is unavailable neither system will work properly. This is the classic trade-off between availability and consistency, and again there is no single right answer.

Yes, it’s eventual consistency. Who should own the functionality that updates the read model? That’s a good question, and yet again it depends on the functional context. For example, when ManageCartItemsPolicy publishes an event saying something was added to the Cart, do subscribers expect the read model to already be updated? If yes, the update belongs to ManageCartItemsPolicy; otherwise it could be a separate AC.

Hope this helps.
Mike

That all makes sense Mike, and I appreciate your insights. If I may, I have one last question to end this thread (I promise).

From reading other discussions it’s clear that the saga would need to publish a fat-ish event for the read model handlers to fulfil the needs of their model and, ultimately, what a user would see. This event is within the service boundary and only (potentially) crosses an AC boundary, so it’s okay for it to contain fat (volatile) data.

Other services might be interested in these event messages; however, they should not subscribe to this volatile data and should instead subscribe to thin events containing only IDs as much as possible. That’s my understanding anyway.

With that in mind, how do we achieve the requirement of supporting a fat event for read model ACs to subscribe to and a thin event for other services to subscribe to? I imagine for a start there are 2 NuGet packages: one for service-internal events and another for public events. Would we just publish both from the saga at the same time?

@Mark_Phillips you can achieve that by using inheritance, e.g.:

interface IThinEvent
{
   int Id { get; }
}

class FatEvent : IThinEvent
{
   public int Id { get; set; } //this comes from the interface
   //other properties not defined in the interface go here
}

You can then distribute to clients outside the service a package that contains only the IThinEvent interface. They subscribe to that. They will receive the entire payload, but will only deserialize what the interface dictates.
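From the external subscriber’s side, that could look something like this (a sketch only; the handler name is invented, and it assumes NServiceBus, which dispatches messages polymorphically, so a handler for the interface also receives the concrete FatEvent):

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus;

// Subscriber outside the service boundary. It references only the public
// package containing IThinEvent; the publisher actually sends FatEvent.
public class ExternalSubscriber : IHandleMessages<IThinEvent>
{
    public Task Handle(IThinEvent message, IMessageHandlerContext context)
    {
        // Only the members declared on IThinEvent are visible here,
        // even though the wire payload contains the fat data.
        Console.WriteLine($"Received event for id {message.Id}");
        return Task.CompletedTask;
    }
}
```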

Thoughts?
.m

Oh, I wasn’t aware NSB supported subscription by interface. That’s a nice approach; it means my publishers don’t have to concern themselves with the contract boundary so much. That’s left to the package containing those contracts.

Thanks @mauroservienti

@Mark_Phillips it is also possible to use consumer-driven contracts if required, as outlined here

Thanks @danielmarbach, that definitely looks like an option.

In addition, I’m also considering using shared, linked contract code for the fat events rather than NuGet packages so I don’t expose them outside the service boundary. Just a thought.

Mark, you can ask as many as you want :slight_smile: I think it’s an interesting discussion :slight_smile:

When the Saga is not responsible for coordinating the view model updates, the same thoughts as @mauroservienti and @danielmarbach apply.

When the Saga is responsible, I would rather send fat-ish command(s) to update the view model(s) and use a message Reply back to the Saga. This is inside the service boundary, so commands are fine and should come from the internal contract (NuGet package). After the Reply (or Replies) come back to the Saga, it publishes an event from the public contract (a separate NuGet package) containing IDs only. This is outside the service boundary, so everything is correct.
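A rough sketch of that choreography, assuming NServiceBus (all message, handler, and contract names here are invented for illustration; the saga correlation mapping for the reply is omitted for brevity):

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus;

// Internal contract (service-only NuGet package): fat-ish command and reply
public class UpdateCartView : ICommand { public Guid OrderId { get; set; } /* ...fat data... */ }
public class CartViewUpdated : IMessage { public Guid OrderId { get; set; } }

// Public contract (separate NuGet package): thin, ids-only event
public class CartItemsChanged : IEvent { public Guid OrderId { get; set; } }

// View model AC: updates its store, then replies to the saga
public class CartViewHandler : IHandleMessages<UpdateCartView>
{
    public async Task Handle(UpdateCartView message, IMessageHandlerContext context)
    {
        // ...persist the fat data to the view model store...
        await context.Reply(new CartViewUpdated { OrderId = message.OrderId });
    }
}

// Inside the saga: only after the reply arrives is the public event published
public partial class ManageCartItemsPolicy
{
    public Task Handle(CartViewUpdated message, IMessageHandlerContext context) =>
        context.Publish(new CartItemsChanged { OrderId = message.OrderId });
}
```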

Hope this helps.
Mike

When the Saga is responsible, I would rather send fat-ish command(s) to update the view model(s)

I could use request/response with a handler that updates a “master data” model, and then different (view-focused) ACs could map this data model to their own view on demand, using whatever functional mapping they require within the constraints of what’s available at the data level within the service.

My reasoning for a common (service) data model is that a view (from a single service AC) is not necessarily static for all time, i.e. there may be time factors that affect a projection.

This approach really just moves the behaviour (process) out of an aggregate (for example) into sagas, leaving the data part intact and available for view generation as required.

Is this also a viable approach or do you see downsides?

Just to clarify what I mean by a view not being static for all time, here’s an example:

Someone adds items to a cart from a specific menu, only in this case the menu has an operating availability schedule. The view may want to tell the user at checkout that the order cannot be placed at the time requested, and may suggest when the order could be placed instead (perhaps as a pre-order). For ASAP orders, where no future order datetime is supplied, this information is relative to the point in time the view is requested.

I’ve just watched Udi’s video at https://www.youtube.com/watch?v=fWU8ZK0Dmxs, which got me thinking a little more about my last example: adding items to a cart, but only from a menu that is currently in operation. That’s fine from a read projection perspective, but it has implications from a business process one, as I’d originally thought that at some point I’d need to apply the same constraint checks within an aggregate or saga.

As Udi explained, the “IF protector” was not really protecting due to possible race conditions. For example, perhaps the seller has to change their menu’s hours of operation due to staffing problems, and this change may happen the moment after an item is added to the cart. As he further elaborates, I could move this check to the checkout step, but the same race condition could occur there too.

I guess where I’m going with this is: should I be concerning myself with these constraint checks in these truly collaborative situations that invite race conditions, or should I instead just render views with information as current as possible to drive the UI process, knowing that a race condition is likely to be resolved by the collaborating party in due course anyway?

For example, when an item was shown as available in the cart or at checkout and its menu is subsequently unscheduled post-checkout, should I allow the item to be added to the cart with no IF constraint, pass through the checkout process unhindered by an IF constraint, be authorised for payment, etc., and only at, say, seller fulfillment do the real check (human, not silicon), resulting in either an OrderAccepted or OrderDeclined?

It’s OK, so publishing the event using one of the approaches mentioned above should be fine.

Unfortunately, I don’t think there is any technical solution that resolves race conditions 100%. A real-life example: I’ve had a couple of situations where the order process succeeded, but the next day I got a phone call informing me that one of the products was no longer available. That was the business process - inform the customer by phone. They could also have sent me an e-mail, or done nothing (bad practice).

“IF” statements only increase the chances of informing the user as quickly as possible that something is no longer available; they don’t protect you 100%. You have to find a business solution for your specific use case.

Mike

It’s OK, so publishing the event using one of the approaches mentioned above should be fine.

That’s good to know.

That makes sense. I guess it helps to ask the right questions of the business, i.e. to delve deeper into those what-if scenarios, and to make it clear to them that the tech cannot solve all the problems 100% of the time.

Thanks Mike.