End-to-end / Integration testing

I have used NServiceBus for a number of years through several iterations. One of the most glaring oversights within the framework, and the biggest cause of headaches when it comes to deploying components to environments, is the lack of an integration testing framework.

I’ve seen a few NServiceBus applications grow exponentially in the number of components, and one point of contention has been ensuring that the components’ communication paths and interactions remain consistent between releases. We have used unit testing practices throughout each of these projects, and dabbled with third-party end-to-end testing frameworks and consumer-driven contracts, none of which have quite achieved what we wanted or have been easy to maintain.

My question is: as NServiceBus applications grow in complexity, diverge into multiple components, and branch their communication and interaction paths, how does NServiceBus intend these applications to be tested end-to-end, or as component changes are integrated into larger application suites made up of hundreds, possibly thousands, of components?

Hi Gary,

First of all, a huge thanks for reaching out to us directly. As someone who has used NServiceBus and our platform extensively in your projects, we value your opinion and insights. Hearing directly from people like you helps make our products much better. We have introduced features, fixed bugs, and improved our platform in countless ways thanks to customers like you.

Regarding testing, as you pointed out, we have documentation that shows our users how to unit test handlers and sagas (Testing NServiceBus with fluent style • Testing • Particular Docs) and samples (Unit Testing NServiceBus • Testing Samples • Particular Docs), but it’s somewhat dispersed among the other documentation articles. Testing guidance (both unit testing and integration testing) is good to have, especially in a distributed environment.
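For context, the existing guidance centres on handler-level tests along these lines. This is only a rough sketch using the NServiceBus.Testing package; the SubmitOrder/OrderSubmitted messages, the handler, and the xUnit assertions are purely illustrative:

// Rough sketch of a handler-level unit test using the NServiceBus.Testing package.
// The SubmitOrder/OrderSubmitted messages and the handler are illustrative only.
using System.Linq;
using System.Threading.Tasks;
using NServiceBus;
using NServiceBus.Testing;
using Xunit;

public class SubmitOrder : ICommand { public string OrderId { get; set; } }
public class OrderSubmitted : IEvent { public string OrderId { get; set; } }

// The production handler under test (illustrative).
public class SubmitOrderHandler : IHandleMessages<SubmitOrder>
{
    public Task Handle(SubmitOrder message, IMessageHandlerContext context)
    {
        return context.Publish(new OrderSubmitted { OrderId = message.OrderId });
    }
}

public class SubmitOrderHandlerTests
{
    [Fact]
    public async Task Publishes_OrderSubmitted()
    {
        var handler = new SubmitOrderHandler();
        var context = new TestableMessageHandlerContext();

        await handler.Handle(new SubmitOrder { OrderId = "42" }, context);

        // Assert on the messages the handler published while handling SubmitOrder.
        var published = context.PublishedMessages
            .Select(p => p.Message)
            .OfType<OrderSubmitted>()
            .Single();
        Assert.Equal("42", published.OrderId);
    }
}

That covers a single handler in isolation; the end-to-end concerns you describe sit above that level.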

In the short term, let me raise this internally with our developer education team that is responsible for creating necessary tutorials and guidance to see if there is something we can address. In the long term, we’ll look into how we can make this easier within the Particular Platform itself.

Can we reach out to you directly via email, so we can set up a time for a call and try to understand your scenarios better?

Looking forward to hearing from you and once again thank you for raising this.
Cheers,
Indu Alagarsamy

Hi There,

I was facing an issue when creating integration tests for sagas: how to know when a saga has completed.
At the beginning I added a call to a static action in my sagas when they completed, but it was ugly.
I resolved this issue with an IBehavior implementation:

using System;
using System.Threading.Tasks;
using NServiceBus;
using NServiceBus.Features;
using NServiceBus.Pipeline;

public class TestSagaBehavior : IBehavior<IInvokeHandlerContext, IInvokeHandlerContext>
{
    // Test code assigns this callback to be notified whenever a saga completes.
    public static Action<Saga> OnSagaComplete { get; set; }

    public async Task Invoke(IInvokeHandlerContext context, Func<IInvokeHandlerContext, Task> next)
    {
        // Let the handler (or saga) run first.
        await next(context).ConfigureAwait(false);

        // If the handler was a saga and it marked itself as completed, signal the test.
        if (context.MessageHandler.Instance is Saga saga && saga.Completed)
        {
            OnSagaComplete?.Invoke(saga);
        }
    }
}


public class TestSagaFeature : Feature
{
    internal TestSagaFeature()
    {
        EnableByDefault();
    }

    protected override void Setup(FeatureConfigurationContext context)
    {
        context.Pipeline.Register<TestSagaRegistration>();
    }
}

public class TestSagaRegistration : RegisterStep
{
    public TestSagaRegistration() : base(
        stepId: "TestSaga",
        behavior: typeof(TestSagaBehavior),
        description: "Test saga behavior to know when a saga is completed.")
    {
    }
}

It’s not perfect; maybe I should use an event instead of an action, but it meets my requirement without adding code to my sagas only for integration testing.

I wonder if there is a better solution to this issue.

Regards

Olivier

So you are testing your saga end to end by sending messages to the endpoint hosting the saga and you want to verify that the saga was completed?

What actions do you take when the OnSagaComplete event fires? (I assume you invoke some “Asserts”?)

I think the discussion is drifting a little away from the issues I am attempting to address. I am trying to look at introducing testing methodologies that avoid destructive behaviours being introduced into a complicated system made up of several components, where the components themselves each meet their own acceptance criteria.

For example, to use a small illustration taken from the SOA done right workshop (GitHub - Particular/Workshop: SOA Done Right): your finance and sales endpoints together instruct the shipping endpoint when to ship the product to the client.


However, what I am finding as the project grows larger, with more components and intricate interaction paths, is that some seemingly insignificant change can be completely destructive to the function of the system as a whole. For example, if a modification were made to the sales endpoint so that it no longer triggers the OrderSubmitted event, with the unit tests modified to reflect this, the system would no longer ship the product.


This seems like a glaringly obvious expected behaviour in my example, but in larger systems, where a message’s importance to an interaction path may be lost or misunderstood, it can lead to completely devastating behaviour in the application as a whole. I would like to perform end-to-end tests which check the integrity of the application’s functions as a whole. How do I test that my system ships a product when payment succeeded and the order was submitted?
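To make the question concrete, this is roughly the shape of test I would like to be able to write. It is only a sketch: hosting the three endpoints in one test process on the LearningTransport, the PlaceOrder/ShipOrder message names, and the spy handler are all my own assumptions rather than the workshop’s actual types.

// Sketch only: assumes the Sales, Finance and Shipping endpoints can be hosted
// in a single test process on the LearningTransport, and that the real handlers
// live in assemblies the test project references. Message names are invented.
using System;
using System.Threading.Tasks;
using NServiceBus;
using Xunit;

public class PlaceOrder : ICommand { public string OrderId { get; set; } }
public class ShipOrder : ICommand { public string OrderId { get; set; } }

// Test-only spy hosted alongside the Shipping endpoint; signals when shipping happens.
public class ShipOrderSpy : IHandleMessages<ShipOrder>
{
    public static readonly TaskCompletionSource<string> Shipped =
        new TaskCompletionSource<string>(TaskCreationOptions.RunContinuationsAsynchronously);

    public Task Handle(ShipOrder message, IMessageHandlerContext context)
    {
        Shipped.TrySetResult(message.OrderId);
        return Task.CompletedTask;
    }
}

public class ShippingEndToEndTests
{
    [Fact]
    public async Task Ships_the_order_once_it_is_submitted_and_paid()
    {
        // In a real test each endpoint would limit assembly scanning to its own handlers.
        var sales = await StartEndpoint("Sales");
        var finance = await StartEndpoint("Finance");
        var shipping = await StartEndpoint("Shipping");

        // Drive the system from the outside, the same way a production client would.
        await sales.SendLocal(new PlaceOrder { OrderId = "42" });

        // Sales publishes OrderSubmitted, Finance publishes PaymentSucceeded, and the
        // shipping policy should then send ShipOrder, which the spy observes.
        var completed = await Task.WhenAny(ShipOrderSpy.Shipped.Task, Task.Delay(TimeSpan.FromSeconds(30)));
        Assert.Same(ShipOrderSpy.Shipped.Task, completed);
        Assert.Equal("42", await ShipOrderSpy.Shipped.Task);

        await Task.WhenAll(sales.Stop(), finance.Stop(), shipping.Stop());
    }

    static Task<IEndpointInstance> StartEndpoint(string name)
    {
        var configuration = new EndpointConfiguration(name);
        configuration.UseTransport<LearningTransport>();
        configuration.UsePersistence<LearningPersistence>();
        return Endpoint.Start(configuration);
    }
}

Even a sketch like this raises the questions I keep running into: how to host the endpoints for the test, how to observe messages without polluting production code, and how to keep it maintainable as the number of components grows.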

I do stuff like this:

var waitHandle = new ManualResetEvent(false);
TestSagaBehavior.OnSagaComplete = saga =>
{
    waitHandle.Set();
};

// code to test the saga here

var signalReceived = waitHandle.WaitOne(10000);

Assert.True(signalReceived);

Got it, so you have test code running in your endpoint that does relevant asserts.

This would limit you to only asserting on things that happen in a single endpoint. Have you seen that as a limitation (like the scenario described by @garyamorris)?

The way I would test this is to insert something that would “instrument” those endpoints by plugging into the incoming and outgoing pipelines and, let’s say, sending a message to an “instrumentation” queue when interesting things happen.
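Roughly along these lines (only a sketch to illustrate the idea; the MessageProcessed contract, the “instrumentation” queue name, and the feature wiring are assumptions, not something we ship today):

// Sketch: a behavior that reports every successfully processed logical message
// to a well-known "instrumentation" queue. Contract, queue name and wiring are
// illustrative assumptions.
using System;
using System.Threading.Tasks;
using NServiceBus;
using NServiceBus.Features;
using NServiceBus.Pipeline;

public class MessageProcessed : IMessage
{
    public string EndpointName { get; set; }
    public string MessageType { get; set; }
    public string MessageId { get; set; }
}

public class InstrumentationBehavior : Behavior<IIncomingLogicalMessageContext>
{
    readonly string endpointName;

    public InstrumentationBehavior(string endpointName)
    {
        this.endpointName = endpointName;
    }

    public override async Task Invoke(IIncomingLogicalMessageContext context, Func<Task> next)
    {
        // Let the handlers run first; only report messages that were processed successfully.
        await next().ConfigureAwait(false);

        var options = new SendOptions();
        options.SetDestination("instrumentation"); // collector queue the tests read from

        await context.Send(new MessageProcessed
        {
            EndpointName = endpointName,
            MessageType = context.Message.MessageType.FullName,
            MessageId = context.MessageId
        }, options).ConfigureAwait(false);
    }
}

// Enabled only in test deployments (and not on the collector endpoint itself,
// to avoid reporting the instrumentation messages recursively).
public class InstrumentationFeature : Feature
{
    protected override void Setup(FeatureConfigurationContext context)
    {
        context.Pipeline.Register(
            new InstrumentationBehavior(context.Settings.EndpointName()),
            "Reports processed messages to the instrumentation queue");
    }
}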

I would then write something that would extract that information into something you can assert on. Is this something that you have considered? Does it sound like a good idea? Is this something that you would want us to provide?

Not in my case, but we don’t really do E2E tests; ours are much more integration tests, and everything is loaded in a single process.