We’re happy to announce the release candidate of NServiceBus for .NET Core, allowing you to run high-performance endpoints on both Windows and Linux platforms. The release candidate version is available with a go-live license and is fully supported for production use.
.NET Core support is part of NServiceBus Version 7. See the full list of changes and guidance on how to upgrade in our upgrade guide.
We recommend the following official resources for more information about .NET Core:
Am I correct to assume that the “wire compatibility” remains intact, allowing endpoints running NSB 6 and NSB 7 to communicate? That would also seem to imply that a mixture of .NET Core and .NET Framework endpoints is supported in a similar manner. Is that correct?
Full disclosure… I am just BEGINNING to research what it will take to move my team’s largest web client from MVC to ASP.NET Core 2.x. I don’t know what I don’t know (yet). In that light, I also want to thank you for including links to that supplemental guidance. Very helpful!
I think that’s going to make our reliance on MSMQ within our existing NSB 6-based endpoints the largest technical challenge.
My intention is to move individual services to .NET Core as opposed to a single massive upgrade cycle for all of our existing endpoints. If there are any emerging best practices in this regard, please share them.
My current thinking is that my Core-based endpoints will need to move to RabbitMQ and that I will need to develop my own integration endpoints to bridge the new Rabbit-based bus with the legacy MSMQ-based bus. Basically, the same approach I would take if I were integrating with a 3rd party system written in a different language. Roughly the kind of thing sketched below. Does that sound right?
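To illustrate what I mean by an integration endpoint, here is a very rough sketch of hosting one endpoint per transport in a single bridge process and forwarding messages across. All names, the message type, and the wiring are placeholders/assumptions on my part, not a finished design; subscription and routing configuration are omitted.

```csharp
using System.Threading.Tasks;
using NServiceBus;

// Illustrative event that needs to cross from the MSMQ side to the RabbitMQ side.
public class OrderAccepted : IEvent
{
    public string OrderId { get; set; }
}

// Illustrative wrapper holding the RabbitMQ endpoint's session so the
// forwarding handler on the MSMQ side can re-publish onto the other bus.
public class RabbitBridge
{
    public IMessageSession Session { get; set; }
}

// Handler hosted by the MSMQ-side bridge endpoint: re-publishes the event on RabbitMQ.
public class ForwardOrderAccepted : IHandleMessages<OrderAccepted>
{
    readonly RabbitBridge bridge;

    public ForwardOrderAccepted(RabbitBridge bridge)
    {
        this.bridge = bridge;
    }

    public Task Handle(OrderAccepted message, IMessageHandlerContext context)
    {
        return bridge.Session.Publish(message);
    }
}

class BridgeHost
{
    static async Task Main()
    {
        // Bridge process runs on Windows/.NET Framework because of the MSMQ side.
        var rabbitConfig = new EndpointConfiguration("Bridge.Rabbit");
        var rabbit = rabbitConfig.UseTransport<RabbitMQTransport>();
        rabbit.ConnectionString("host=my-rabbitmq-server"); // placeholder
        rabbit.UseConventionalRoutingTopology();
        var rabbitEndpoint = await Endpoint.Start(rabbitConfig);

        var msmqConfig = new EndpointConfiguration("Bridge.Msmq");
        msmqConfig.UseTransport<MsmqTransport>();
        // Make the started RabbitMQ endpoint available to the forwarding handler.
        msmqConfig.RegisterComponents(c => c.RegisterSingleton(new RabbitBridge { Session = rabbitEndpoint }));
        var msmqEndpoint = await Endpoint.Start(msmqConfig);

        // ... run until shutdown ...
        await msmqEndpoint.Stop();
        await rabbitEndpoint.Stop();
    }
}
```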
I think that’s going to make our reliance on MSMQ within our existing NSB 6-based endpoints the largest technical challenge.
Can you elaborate a bit on why you think it is a challenge? You can pull in the MSMQ transport that is compatible with NServiceBus V7. As far as .NET Core and .NET Framework are concerned, they can co-exist on the same machine. Your MSMQ endpoints need to run on Windows, obviously. For the .NET Core endpoints you can freely choose to run them on Linux or keep them on Windows side-by-side.
I’m not sure why you think you need to switch to RabbitMQ.
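To be concrete, an existing endpoint upgraded to V7 but staying on MSMQ could look roughly like the sketch below. This assumes the separate NServiceBus.Transport.Msmq package and a Windows/.NET Framework host; the endpoint name is just a placeholder.

```csharp
using System.Threading.Tasks;
using NServiceBus;

class Program
{
    static async Task Main()
    {
        // Existing endpoint upgraded to NServiceBus 7, still on MSMQ.
        var endpointConfiguration = new EndpointConfiguration("Sales.Orders");

        var transport = endpointConfiguration.UseTransport<MsmqTransport>();
        // TransactionScope keeps the current behavior, including the
        // possibility of DTC escalation when handlers touch other resources.
        transport.Transactions(TransportTransactionMode.TransactionScope);

        var endpointInstance = await Endpoint.Start(endpointConfiguration);
        // ... run until shutdown ...
        await endpointInstance.Stop();
    }
}
```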
Good questions. Happy to share any details that may be helpful. My concerns could be off the mark. As I mentioned, I am still in the early stages of grasping the implications of deciding to move toward Linux and .NET Core.
I have an existing MVC web client. It communicates with maybe a dozen microservices via commands and event messages. Today, they are all on Windows and use MSMQ for their transport. We also use the SignalR backplane to push data to the browser.
From a business perspective, it would be ideal for the web client to be the first endpoint moved to Linux/Core/NSB 7. I am accustomed to using MSMQ in a fire-and-forget manner. I am also under the impression that some of that behavior is the result of having a local instance of MSMQ on the web server and distributed MSDTC transactions.
So, if I move the web client to Linux/ASP.NET Core/NSB 7 and it needs to send/receive commands or publish/subscribe to events that are owned by one of those 12 microservices…
I can send commands to a microservice over HTTP instead of MSMQ. No fire and forget. No transactional consistency. Right?
How does my web client receive information from the bus in this scenario? This is the part of the process that is least clear to me right now.
I believe I read that SignalR isn’t quite ready for ASP.NET Core yet. But I’m honestly not too concerned about this detail. If the web client can receive the data via messages, I can find an appropriate mechanism for pushing it to the browser with or without SignalR.
Hopefully that helps to clarify what I am contemplating. My intention is to perform some experiments and POCs before attempting to actually move any of our production systems to Linux/Core/NSB 7. I am just trying to pin down the details that should be represented in those experiments and POCs.
Hi Justin,
Sorry for the late reply. I was at the MVP Summit in Seattle last week.
MSMQ can escalate to DTC transactions if the handler uses another resource that supports DTC. So, for example, if you use SQL Server with EF or ADO.NET inside the handler, the transaction used by MSMQ can escalate. Are you currently relying on DTC transactions? If not, you could already lower the transaction mode today to make sure no escalation happens.
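For illustration, lowering the transaction mode could look roughly like this (the endpoint name is a placeholder):

```csharp
using NServiceBus;

var endpointConfiguration = new EndpointConfiguration("Sales.Orders");

var transport = endpointConfiguration.UseTransport<MsmqTransport>();
// SendsAtomicWithReceive keeps the native MSMQ transaction covering the
// receive and any outgoing messages, but prevents escalation to a
// distributed (DTC) transaction when a handler also touches SQL Server
// or another DTC-capable resource. Handlers then own their database
// consistency themselves (e.g. via idempotency or the outbox feature).
transport.Transactions(TransportTransactionMode.SendsAtomicWithReceive);
```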
Ok, now I understand why you are thinking about switching to RabbitMQ. If the web client currently receives messages and has handlers that invoke SignalR hubs, ideally it would be able to listen to messages arriving in a queue. But I’m questioning the move to RabbitMQ a bit here. To me it sounds like the primary driver to switch to RabbitMQ is “we want to use Linux”. I wonder if this is justification enough to take on the operational risk that comes with it. Your company knows how to use MSMQ, MSMQ is rock solid, and it seems to be working fine for you. Is switching to Linux really giving you that much benefit? I’m pretty sure the licensing costs you might save would be spent many times over on ramping up with operating a Linux OS, RabbitMQ, etc.
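Roughly, that pattern of a handler inside the web client listening on the queue and pushing on to the browser might look like the sketch below. The message type and the notifier abstraction are placeholders I made up, not a prescription for any particular SignalR version; the notifier implementation would need to be registered with the endpoint’s container.

```csharp
using System.Threading.Tasks;
using NServiceBus;

// Illustrative event published by a back-end microservice.
public class OrderShipped : IEvent
{
    public string OrderId { get; set; }
}

// Illustrative abstraction over whatever push mechanism the web client uses
// (SignalR hub, WebSockets, etc.).
public interface IBrowserNotifier
{
    Task NotifyAsync(string eventName, string payload);
}

// NServiceBus handler hosted inside the web client: it listens on the
// endpoint's queue and pushes the data on to connected browsers.
public class OrderShippedHandler : IHandleMessages<OrderShipped>
{
    readonly IBrowserNotifier notifier;

    public OrderShippedHandler(IBrowserNotifier notifier)
    {
        this.notifier = notifier;
    }

    public Task Handle(OrderShipped message, IMessageHandlerContext context)
    {
        return notifier.NotifyAsync("orderShipped", message.OrderId);
    }
}
```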
Can you elaborate a bit more on your reasons why you think it is desirable to switch to Linux? After that discussion I’m happy to dive more into possible solutions.
My decision to move toward Linux represents an effort to react to decisions that have already been made within my company’s product development teams. Historically, we have focused on a Microsoft-centric stack, a tiered/SOA architecture, and a traditional enterprise delivery model. We’re intentionally moving toward Linux, Microservices, and Docker containers.
My responsibilities are focused on aligning our cloud hosting services with the product-level changes that this decision will ultimately entail. To be clear, no one is forcing me to change my stack. Rather, I am motivated to do so in order to help acquire and maintain the knowledge needed to host these technologies in an optimal manner within our managed, private cloud. Hopefully, I’ll learn some useful lessons along the way. I expect SOME pain… but hopefully that stays manageable, too!
Under these circumstances I would suggest the following:
Check your code’s assumptions around DTC and transactionality. If you don’t need DTC transactions, I think it is probably best if you switch your endpoints to either RabbitMQ or the SQL Server transport. Considering your product development teams made the decision to move to Linux, microservices, and Docker, I’d raise the queuing question with them and make sure you align with them. With NServiceBus in place, switching your existing endpoints to another transport should be simpler (again, as long as you don’t rely on MSMQ DTC and your code can deal with the ReceiveOnly transaction mode).
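To make that concrete, an endpoint reconfigured for RabbitMQ could look roughly like this. This assumes the NServiceBus.RabbitMQ transport package; the connection string and endpoint name are placeholders.

```csharp
using System.Threading.Tasks;
using NServiceBus;

class Program
{
    static async Task Main()
    {
        var endpointConfiguration = new EndpointConfiguration("Sales.Orders");

        var transport = endpointConfiguration.UseTransport<RabbitMQTransport>();
        transport.ConnectionString("host=my-rabbitmq-server");
        transport.UseConventionalRoutingTopology();

        // RabbitMQ has no DTC support, so ReceiveOnly is the mode the code
        // has to be able to cope with: handlers become responsible for their
        // own consistency (idempotency, outbox, etc.).
        transport.Transactions(TransportTransactionMode.ReceiveOnly);

        var endpointInstance = await Endpoint.Start(endpointConfiguration);
        // ... run until shutdown ...
        await endpointInstance.Stop();
    }
}
```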
Thanks Daniel. That seems to align with my original thinking.
The product teams have rejected MSMQ. They have plans to adopt RabbitMQ… but that’s still in the planning phases.
I KNOW that I am heavily reliant on MS-DTC right now. So, I am trying to decide whether my POC efforts should focus on keeping the existing back-end in place (with MSMQ/DTC) so that the first phase only impacts the web client (a full replacement using ASP.NET Core)… OR… whether I should focus on updating the back-end to RabbitMQ (and removing the business logic that relies on DTC) right out of the gate.
I will flatly admit that I am motivated to favor approach #1 simply because the changes to the web client would have much more visibility at a business level. Furthermore, as you implied, the existing MSMQ/DTC endpoints work perfectly well.
Thanks again for your thoughtful questions and guidance!