Using a Custom Message Auditing Solution



If we wanted to replace ServiceControl auditing with our own message auditing solution, what would be the best place in the NSB middleware to plug in that behavior? Optimally, this would give us the same guarantees as ServiceControl auditing: consistency with the actual outgoing messages and with the processing of the incoming message. In other words, we don’t want audit entries saying a message was successfully processed when it was actually rolled back.
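
To make the question concrete, here is a minimal sketch of the kind of behavior I have in mind, assuming the Confluent.Kafka client; the class name, topic name, and placement in the incoming physical stage are my own placeholders, not anything prescribed by NSB:

```csharp
using System;
using System.Threading.Tasks;
using Confluent.Kafka;
using NServiceBus.Pipeline;

// Sketch only: forward a copy of each incoming message to Kafka, but only
// after the rest of the incoming pipeline (handlers, outbox) has succeeded.
public class KafkaAuditBehavior : Behavior<IIncomingPhysicalMessageContext>
{
    readonly IProducer<string, byte[]> producer;

    public KafkaAuditBehavior(IProducer<string, byte[]> producer) => this.producer = producer;

    public override async Task Invoke(IIncomingPhysicalMessageContext context, Func<Task> next)
    {
        await next(); // throws on handler failure, so a rolled-back message is never audited

        // Caveat: the transport receive can still be retried after this point
        // (e.g. a lost ASB lock), so a Kafka consumer should de-duplicate on the message id.
        await producer.ProduceAsync("nsb-audit", new Message<string, byte[]>
        {
            Key = context.MessageId,
            Value = context.Message.Body.ToArray() // Body is ReadOnlyMemory<byte> in NSB v8
        });
    }
}
```

Is the incoming physical stage even the right place for this, or is there a later stage with stronger consistency guarantees?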

We are already overloading our transport, Azure Service Bus, at peak times, so we do not want to send audit messages to an ASB queue and double the load. We would instead send them to a Kafka topic in our Confluent subscription, wired up roughly as sketched below.
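
The wiring I am picturing looks something like this; the Confluent bootstrap server, credentials, and endpoint name are placeholders:

```csharp
using Confluent.Kafka;
using NServiceBus;

var producer = new ProducerBuilder<string, byte[]>(new ProducerConfig
{
    BootstrapServers = "pkc-xxxxx.westeurope.azure.confluent.cloud:9092", // placeholder
    SecurityProtocol = SecurityProtocol.SaslSsl,
    SaslMechanism = SaslMechanism.Plain,
    SaslUsername = "<confluent-api-key>",
    SaslPassword = "<confluent-api-secret>"
}).Build();

var endpointConfiguration = new EndpointConfiguration("Sales"); // placeholder endpoint name

// Deliberately no AuditProcessedMessagesTo(...) call, so nothing goes to an ASB audit queue.
endpointConfiguration.Pipeline.Register(
    new KafkaAuditBehavior(producer),
    "Forwards successfully processed messages to a Kafka audit topic");
```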

Another thought: with OpenTelemetry, the only audit info we really need is the outgoing messages with their bodies, because the processing of the incoming messages is already available in the distributed tracing data. If outgoing message bodies are all we are missing, would it be better to leave NSB auditing disabled and enable SQL Server CDC with Debezium on our NSB Outbox table to get a stream of outgoing messages to Kafka, as alluded to here?
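
For reference, the CDC idea would look roughly like the following Debezium SQL Server connector config (Debezium 2.x property names; the host, database, credentials, and table are placeholders — I believe SQL Persistence names the outbox table {endpoint}_OutboxData, but check your schema):

```json
{
  "name": "nsb-outbox-connector",
  "config": {
    "connector.class": "io.debezium.connector.sqlserver.SqlServerConnector",
    "database.hostname": "my-sqlmi.public.abc123.database.windows.net",
    "database.port": "3342",
    "database.user": "debezium",
    "database.password": "<secret>",
    "database.names": "NsbPersistence",
    "table.include.list": "dbo.Sales_OutboxData",
    "topic.prefix": "nsb-outbox",
    "schema.history.internal.kafka.bootstrap.servers": "pkc-xxxxx.westeurope.azure.confluent.cloud:9092",
    "schema.history.internal.kafka.topic": "schemahistory.nsb-outbox"
  }
}
```

Two wrinkles I can already see: the outbox only captures messages sent from within handlers (anything sent with immediate dispatch or outside message processing would not show up), and the Operations column holds all outgoing messages of one incoming message serialized together, so the stream would need some unpacking.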

Or maybe there are some other thoughts on auditing with the upcoming NSB & Kafka integration?

About us:

  • In Azure
  • Using the ASB transport with ASB Premium; tried partitioned ASB, but we currently have message sizes over 1 MB, so ASB partitioning is not compatible
  • Will soon be using NSB SQL Server Persistence (we made a mistake initially choosing Cosmos for NSB persistence even though all business data is in Azure SQL Managed Instance DBs)
  • On NSB v8, but moving to v9+ at some point
  • NSB apps with handlers are running in AKS, scaling with KEDA

I am assuming that implementing a custom audit action might be the recommendation, but I wasn’t sure that is the best fit if we want to replace the audit pipeline entirely. Most of the examples I found modify the audit behavior but still send to an audit queue on the NSB-configured transport.

We do have an outgoing message auditing implementation that hooks into IOutgoingLogicalMessageContext to send outgoing messages to a Kafka topic (roughly sketched below), but I am interested in what the deficiencies of this approach would be, as well as how it could be improved.
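
The current hook is shaped roughly like this (serialization and topic name simplified):

```csharp
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Confluent.Kafka;
using NServiceBus.Pipeline;

public class OutgoingKafkaAuditBehavior : Behavior<IOutgoingLogicalMessageContext>
{
    readonly IProducer<string, byte[]> producer;

    public OutgoingKafkaAuditBehavior(IProducer<string, byte[]> producer) => this.producer = producer;

    public override async Task Invoke(IOutgoingLogicalMessageContext context, Func<Task> next)
    {
        // The deficiency we already suspect: this stage runs before the outbox/transport
        // transaction commits, so if the handler rolls back afterwards we have already
        // written an audit entry for a message that will never actually be sent.
        await producer.ProduceAsync("nsb-outgoing-audit", new Message<string, byte[]>
        {
            Key = context.MessageId,
            Value = JsonSerializer.SerializeToUtf8Bytes(context.Message.Instance)
        });

        await next();
    }
}
```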

Thanks!

Hi Ben,

How did you come to that conclusion? Partitioned premium supports large message sizes up to 100 MB. Did you by any chance run into this problem?

Daniel

Thanks for the reply, @danielmarbach. This was determined after many long hours with MS Support and communicating with the ASB team through GitHub issues.

Partitioned entities limitations - Premium Partitioning Does Not Support Large Messages · Issue #121290 · MicrosoftDocs/azure-docs (github.com)

Here is the response from the dev:

This is a feature gap in partitioned namespaces. A fix is checked in last month. It should be deployed to production by the end of this month.

Even after this fix, batch size cannot exceed 1 MB. A single message can be up to 100 MB. But a batch cannot be larger than 1 MB.

We had been checking for a while after that message was posted, but gave up after a couple of months.

Cannot write large messages (>1MB) to Premium Partitioned Namespace · Issue #703 · Azure/azure-service-bus (github.com)

It may be fixed now, but we have not received any notification from MS that it has been resolved.

I commented on the linked issues. Something is still fishy. I wasn’t aware of this problem. I learn something new every day :smiley:


For those reading the conversation above: the Azure Service Bus team is rolling out large message support for partitioned namespaces. The documentation was merged prematurely, indicating that large message support is already available for partitioned namespaces. It will still take a few weeks, though, until the feature reaches all supported Azure regions.
