Putting your events on a diet • Particular Software

Anybody can write code that will work for a few weeks or months, but what happens when that code is no longer your daily focus and the cobwebs of time start to sneak in? What if it’s someone else’s code? How do you add new features when you need to relearn the entire codebase each time? How can you be sure that making a small change in one corner won’t break something elsewhere?


This is a companion discussion topic for the original entry at https://particular.net/blog/putting-your-events-on-a-diet

I guess both commands and events are async. How do you ensure that commands are received by each service before the sales service has received the event?

It’s always possible for messages in a distributed system to arrive out of order. Check out our blog post “You don’t need ordered delivery” to find out how that works.

Thanks a lot for the valuable article. I’m just wondering whether we would use the same approach with the CQRS pattern. How would it be applied when events from the command side should contain enough data to allow the query side to update its state without querying the command side, in order to reduce latency and increase consistency?


The sample in the blog post is already using the CQRS pattern. CQRS just stands for Command Query Responsibility Segregation. By having commands like StoreShippingAddressForOrder you are already segregating commands from queries.
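To make that concrete, a command and a matching skinny event might look roughly like this (a minimal sketch in NServiceBus style; the ShippingAddress property and the ShippingAddressStoredForOrder event name are illustrative assumptions, not taken from the article):

// Command: tells the owning service to do something; sent to a single logical destination
public class StoreShippingAddressForOrder : ICommand
{
  public string OrderId { get; set; }
  public string ShippingAddress { get; set; }
}

// Event: announces that something has already happened; published to any interested subscribers
public class ShippingAddressStoredForOrder : IEvent
{
  public string OrderId { get; set; }
}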

But in CQRS we have a read database where we store denormalized data for our user interface. We have to update this read database when something happens in our system. This is called projection. We use events for this, but since our events now do not contain any data, we cannot project the event to the database. We also cannot project commands, because a command, unlike an event, is something that has not yet happened in the system. MahmoudSamir101 asked you exactly this.
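For readers following along, this is roughly what such a projection handler looks like when the event is fat enough to carry the data (a minimal sketch in NServiceBus style; the OrderShipped event, OrderSummary, and IReadModelStore are hypothetical names, not from the article):

// Sketch: projecting a *fat* event straight into the read database.
// Assumes OrderShipped carries the data the read model needs.
public class OrderShippedProjection : IHandleMessages<OrderShipped>
{
  private readonly IReadModelStore readDb; // hypothetical read-model store

  public OrderShippedProjection(IReadModelStore readDb)
  {
    this.readDb = readDb;
  }

  public async Task Handle(OrderShipped message, IMessageHandlerContext context)
  {
    // Only possible because the event itself contains the data
    await readDb.UpsertAsync(new OrderSummary
    {
      OrderId = message.OrderId,
      CustomerName = message.CustomerName,
      Status = "Shipped"
    });
  }
}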

We use the term event in publish/subscribe, where a publisher usually doesn’t care whether, or how many, subscribers exist, and we often talk about business events. These often represent state changes like “OrderShipped”.

IMHO we’re not really talking about a business event here but more about an infrastructure notification (TableXUpdated), which is what could happen when you update a database. The business does not care at all about how data is written.

In your projection example, everything related to the models that are modified is tightly coupled, schema-wise, around such a notification. You can always query the source database with the ID + version that is in such a message, but I agree that, performance-wise, you could use fat notifications to skip the additional read IO from the database that the projection needs to get its data.
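As a rough sketch of that query-back approach (assuming a skinny notification carrying only an ID + version, and hypothetical IOrderStore/IReadModelStore abstractions):

// Sketch: projecting from a skinny notification by reading the source data back with ID + version.
public class OrderUpdatedProjection : IHandleMessages<OrderUpdated>
{
  private readonly IOrderStore sourceDb;   // hypothetical access to the source database
  private readonly IReadModelStore readDb; // hypothetical read-model store

  public OrderUpdatedProjection(IOrderStore sourceDb, IReadModelStore readDb)
  {
    this.sourceDb = sourceDb;
    this.readDb = readDb;
  }

  public async Task Handle(OrderUpdated message, IMessageHandlerContext context)
  {
    // The additional read IO that a fat notification would let you skip
    var order = await sourceDb.GetAsync(message.OrderId, message.Version);

    await readDb.UpsertAsync(new OrderSummary
    {
      OrderId = order.OrderId,
      CustomerName = order.CustomerName,
      Status = order.Status
    });
  }
}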

For me, the most important reason not to go for fat messages is that you leak your database schema into your messages. That means you cannot easily change your database schema, and if you do, you likely need to version your messages against the database schema to ensure that in-flight messages built against database V1 can still be deserialized once the database is on V2 with a breaking schema change.

My default would be not to use fat messages, avoiding premature optimization at the cost of unnecessary maintenance complexity, unless there is evidence that fat messages are warranted for performance reasons in a few dedicated places.

– Ramon

Another thing to keep in mind is that the article is primarily talking about events that are broadcast between services. You want to keep these skinny so you don’t accidentally create coupling between those services, which are supposed to be autonomous and loosely coupled.

You can use a different class of events within a service, such as to repopulate a read model owned by that service, and those events can be as fat as you want. Since it stays within a logical service, the cohesion should be high, so if you need to make a change, you can usually accomplish that all within one solution, and there’s no risk of that causing a problem for other services.

With NServiceBus, you can even take advantage of polymorphic message dispatch, so you can have events like this:

public class OrderPlaced : IEvent
{
  public string OrderId { get; set; }
}

public class OrderPlacedWithData : OrderPlaced
{
  // Inherits OrderId from OrderPlaced
  // Add whatever other properties you want, for example:
  public string OtherProperty { get; set; }
  public string OtherProperty2 { get; set; }
}

And then when you go to publish the event…

await context.Publish(new OrderPlacedWithData
{
  OrderId = "...",
  OtherProperty = "...",
  OtherProperty2 = "..."
});

Now put OrderPlaced in an OrderService.Contracts assembly, and ship that as a NuGet package to other services. Put OrderPlacedWithData in the OrderService.InternalMessages assembly, and don’t distribute that - keep it only within the OrderService.

Now, another endpoint within the logical OrderService can subscribe to OrderPlacedWithData and get all the data. An outside endpoint can subscribe to OrderPlaced (it’s the only type it has access to) and in its message handler it will only get the OrderId.
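As a sketch of what those two subscribers might look like (the handler class names are mine; the message types are the ones above):

// Endpoint inside the logical OrderService: references OrderService.InternalMessages
public class InternalOrderPlacedHandler : IHandleMessages<OrderPlacedWithData>
{
  public Task Handle(OrderPlacedWithData message, IMessageHandlerContext context)
  {
    // Sees OrderId plus all the extra properties on OrderPlacedWithData
    return Task.CompletedTask;
  }
}

// Endpoint in another service: only references OrderService.Contracts
public class ExternalOrderPlacedHandler : IHandleMessages<OrderPlaced>
{
  public Task Handle(OrderPlaced message, IMessageHandlerContext context)
  {
    // Polymorphic dispatch delivers the same published message here, but only OrderId is visible
    return Task.CompletedTask;
  }
}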

The only issue with polymorphic messages is that the actual message on the wire contains the full message, unencrypted. So the data does actually leave the boundary, but if you only have C# consumers and they only use the contract types, they don’t have immediate access to it.

Only do this when exchanging messages between components that have the same data owner.

The answer is the smaller event, with all of the unnecessary coupling removed

Yes, but not always. The event should be as small as possible, but not trimmed down too far: it should be self-sufficient and not starve for information.

What does that mean? When is a message trimmed down too far? When is it self-sufficient? And when does it starve for more information?

I would say it’s self-sufficient and doesn’t need more information if all the information is already on the receiver’s side?