I love the unit testing available in NServiceBus, but now I’d like to do some integration testing - I know that such a thing exists in something of an unsupported way, but what I’d like to do is maybe something closer to a “system test.”
My scenario is that I’d like to use docker-compose to instantiate multiple services/endpoints and the actual messaging transport (in my case RabbitMQ) and be able to subscribe to one or more events to “assert” against.
This feels like I almost want to be able to create an “anonymous” saga or something as part of my test - which I know isn’t a thing - but I’m curious if anyone has achieved something similar?
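To make it a bit more concrete, here's roughly what I have in mind: a throwaway "probe" endpoint whose only handler records that the event arrived, so the test can await it and assert. All the names here (OrderPlaced, OrderPlacedProbe, the connection string) are placeholders, and the exact transport/topology/serializer calls will vary with the NServiceBus and RabbitMQ transport versions, so this is just a sketch:

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus;

// Hypothetical event published by the system under test; the name is a placeholder.
public class OrderPlaced : IEvent
{
    public Guid OrderId { get; set; }
}

// The "anonymous subscriber" I'm imagining: an ordinary handler whose only job
// is to record that the event arrived so the test can await and assert it.
public class OrderPlacedProbe : IHandleMessages<OrderPlaced>
{
    public static readonly TaskCompletionSource<OrderPlaced> Received =
        new TaskCompletionSource<OrderPlaced>(TaskCreationOptions.RunContinuationsAsynchronously);

    public Task Handle(OrderPlaced message, IMessageHandlerContext context)
    {
        Received.TrySetResult(message);
        return Task.CompletedTask;
    }
}

public static class ProbeEndpoint
{
    // Starts a throwaway endpoint against the RabbitMQ broker that docker-compose
    // brings up. Transport/topology/serializer configuration differs between
    // NServiceBus and RabbitMQ transport versions, so adjust as needed.
    public static async Task<IEndpointInstance> StartAsync(string connectionString)
    {
        var configuration = new EndpointConfiguration("SystemTest.Probe");
        var transport = configuration.UseTransport<RabbitMQTransport>();
        transport.ConnectionString(connectionString);
        transport.UseConventionalRoutingTopology();
        configuration.EnableInstallers();
        return await Endpoint.Start(configuration);
    }
}

// In a test (xUnit/NUnit/whatever):
//   var probe = await ProbeEndpoint.StartAsync("host=localhost");
//   ... trigger the behaviour that should publish OrderPlaced ...
//   var winner = await Task.WhenAny(OrderPlacedProbe.Received.Task, Task.Delay(TimeSpan.FromSeconds(30)));
//   assert that winner == OrderPlacedProbe.Received.Task, then await probe.Stop();
```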
What is it that you're trying to accomplish with the integration tests? Usually, with tests, you expect some kind of behavior and automate the check so it can be repeated.
Usually automating these kinds of tests doesn't add much, because the teams developing the software (NServiceBus, Docker, RabbitMQ, SQL Server, etc.) should already be testing that their own products work. What we do want to test is whether we've set everything up correctly, for example whether one endpoint correctly subscribes to another endpoint's messages. We do this once, and with a proper continuous deployment environment we don't worry about it anymore.
If we do want to test this stuff, it's extremely hard to do in an automated way. There are so many moving parts that even the slightest thing can go wrong and one or more tests will fail. If the transport is starting up and we're expecting a message to arrive within 10 seconds, the test can fail simply because the message is only delivered after 11 seconds.
All in all, we don't have anything to automatically perform these kinds of tests with NServiceBus.
Although it might not be very helpful, does it answer your question?
I agree that testing infrastructure isn't a good use of time. What I'm trying to do is, basically, automate tests that we might otherwise run manually. I guess you could call them smoke tests? I'm composing multiple parts of our system with the actual transport, and I'd like to be able to assert that things are nominally working, especially along paths that should be working prior to a production deploy. Obviously, where things are expected to take longer than that, those aren't tests it would be appropriate to write.
After talking with some folks at Particular, I think the AcceptanceTesting framework will do what I need. At first I thought it only supported an in-memory transport, but since it can run against RabbitMQ as well, this should cover my scenario. That said, I still need to implement the tests, but the examples I was pointed to (each transport repo has its own set of acceptance tests) look like good guides.
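In case it helps anyone who finds this later, here's roughly the shape of a test built on that framework, modeled on the acceptance tests in the transport repos. NServiceBusAcceptanceTest and DefaultServer (the endpoint template that wires up the actual transport, RabbitMQ in my case) come from those suites rather than from the package itself, and the message/endpoint names are placeholders:

```csharp
using System.Threading.Tasks;
using NServiceBus;
using NServiceBus.AcceptanceTesting;
using NUnit.Framework;

public class When_order_is_placed : NServiceBusAcceptanceTest // base class from the transport acceptance test suites
{
    [Test]
    public async Task Subscriber_should_receive_the_event()
    {
        var context = await Scenario.Define<Context>()
            .WithEndpoint<Publisher>(b => b.When(session => session.Publish(new OrderPlaced())))
            .WithEndpoint<Subscriber>()
            .Done(c => c.EventReceived)
            .Run();

        Assert.True(context.EventReceived);
    }

    // Shared state the endpoints write to and the Done condition reads from.
    public class Context : ScenarioContext
    {
        public bool EventReceived { get; set; }
    }

    public class Publisher : EndpointConfigurationBuilder
    {
        // DefaultServer is the per-repo endpoint template that configures the real transport.
        public Publisher() => EndpointSetup<DefaultServer>();
    }

    public class Subscriber : EndpointConfigurationBuilder
    {
        public Subscriber() => EndpointSetup<DefaultServer>();

        public class OrderPlacedHandler : IHandleMessages<OrderPlaced>
        {
            // The framework registers the scenario context, so it can be injected into handlers.
            readonly Context testContext;

            public OrderPlacedHandler(Context testContext) => this.testContext = testContext;

            public Task Handle(OrderPlaced message, IMessageHandlerContext context)
            {
                testContext.EventReceived = true;
                return Task.CompletedTask;
            }
        }
    }

    public class OrderPlaced : IEvent
    {
    }
}
```

From what I can tell, RabbitMQ's native pub/sub sets up the binding when the subscriber endpoint starts, whereas the examples for message-driven pub/sub transports also wait for the subscription message before publishing, so check the pattern in the repo for your transport.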