Let’s suppose we’ve got a web API using ASP.NET Core. As part of an API request, we want to write some data and send a message. However, when the transport is something like Azure Service Bus or RabbitMQ, we can’t coordinate the business data transaction and the outgoing messages in a single transaction.
The outbox right now only works inside of a handler, so how do we use the outbox when we’re not in the context of a handler? In this case, we’re not consuming anything off the outbox in our web API request, but we are producing messages.
That’s not something the Outbox can do at the moment. We are investigating options in the web reliability space, though.
At the moment your best bet is:
1. Handle the incoming HTTP request
2. Send a message to yourself via SendLocal
3. In the SendLocal handler:
   3.1. store the data
   3.2. send the outgoing message(s)
Step 3 will be Outbox-enabled. Obviously, if the client is waiting for an HTTP response that depends on the data stored at step 3.1, this approach doesn’t work, because the incoming HTTP request is already gone by then; the only option is to fall back to something like SignalR, which indeed complicates things.
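To make the flow concrete, here is a minimal sketch of steps 1–3, assuming the web application hosts a regular (not send-only) NServiceBus endpoint with the Outbox enabled; the controller, message, and handler names are invented for the example:

// Hypothetical message types for the example
public class StoreOrder : ICommand
{
    public Guid OrderId { get; set; }
}

public class OrderStored : IEvent
{
    public Guid OrderId { get; set; }
}

public class OrdersController : ControllerBase
{
    readonly IMessageSession messageSession;

    public OrdersController(IMessageSession messageSession) => this.messageSession = messageSession;

    [HttpPost]
    public async Task<IActionResult> Post(Guid orderId)
    {
        // Steps 1 and 2: handle the HTTP request and send a message to ourselves
        await messageSession.SendLocal(new StoreOrder { OrderId = orderId });
        return Accepted();
    }
}

// Step 3: runs inside the endpoint, so the Outbox guarantees apply here
public class StoreOrderHandler : IHandleMessages<StoreOrder>
{
    public async Task Handle(StoreOrder message, IMessageHandlerContext context)
    {
        // 3.1: store the data via the Outbox-managed storage session
        // 3.2: send/publish the outgoing message(s)
        await context.Publish(new OrderStored { OrderId = message.OrderId });
    }
}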
Ah, and a client waiting on some kind of response is going to be the vast majority of cases. If you’re focusing exclusively on ASP.NET Core, that should make your life a lot easier. It’s far, far easier to add middleware with something like services.AddNServiceBus.
If the client needs a response you might be able to use callbacks to make all this happen behind the scenes (web server scale-out will be tricky, though).
I.e., offload to messaging and save some web server threads if things take some time.
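A rough sketch of the callbacks idea, assuming the NServiceBus.Callbacks package with EnableCallbacks() and MakeInstanceUniquelyAddressable(...) configured on the web endpoint; the message, controller, and handler names are invented, and routing for the request message is assumed to be configured:

// Hypothetical request/response messages
public class PlaceOrder : ICommand
{
    public Guid OrderId { get; set; }
}

public class PlaceOrderResponse : IMessage
{
    public bool Accepted { get; set; }
}

public class OrdersController : ControllerBase
{
    readonly IMessageSession messageSession;

    public OrdersController(IMessageSession messageSession) => this.messageSession = messageSession;

    [HttpPost]
    public async Task<IActionResult> Post(Guid orderId)
    {
        // The HTTP request awaits the reply from the back-end endpoint behind the scenes
        var response = await messageSession.Request<PlaceOrderResponse>(
            new PlaceOrder { OrderId = orderId },
            new SendOptions());
        return Ok(response);
    }
}

// Back-end handler: stores data and sends messages (Outbox applies here), then replies
public class PlaceOrderHandler : IHandleMessages<PlaceOrder>
{
    public async Task Handle(PlaceOrder message, IMessageHandlerContext context)
    {
        await context.Reply(new PlaceOrderResponse { Accepted = true });
    }
}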
If I’m doing something like the claim check pattern though, I absolutely need that database transaction to happen as part of the request. I don’t really want to shim in async messaging just to use the outbox.
I have a solution I’ve used in MongoDB and SQL, but it’s really not that pretty as I have to create my own messages in my own outbox then “republish” in a background process.
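For anyone curious, a hand-rolled version usually looks roughly like the following sketch; the table, column, and type names are invented, and the real implementation has more error handling:

// In the web request, within the same connection/transaction as the business data:
var payload = JsonSerializer.Serialize(new OrderPlaced { OrderId = orderId });
using (var cmd = new SqlCommand(
    "insert into CustomOutbox (Id, MessageType, Payload, Dispatched) values (@id, @type, @payload, 0)",
    conn, trans))
{
    cmd.Parameters.AddWithValue("@id", Guid.NewGuid());
    cmd.Parameters.AddWithValue("@type", typeof(OrderPlaced).AssemblyQualifiedName);
    cmd.Parameters.AddWithValue("@payload", payload);
    await cmd.ExecuteNonQueryAsync();
}

// A background "republisher" then polls the table, deserializes each undispatched row,
// sends/publishes it through IMessageSession, and marks the row as dispatched.
// Because the publish and the flag update are separate operations, delivery is
// at-least-once, so downstream consumers need to tolerate duplicates.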
Probably a separate thing, but y’all really should have much deeper integration/support for ASP.NET Core
What I used to recommend is using the SQL Server transport in the web tier (on the boundary of the system) and some other transport, like RabbitMQ, inside the system. This allows me to use the same SQL transaction to store data and send messages.
To make this approach work I recommend using NServiceBus.Router to move messages between the transports in a transparent way. The Router supports all types of communication (send, reply, publish).
If you look at this pattern from the outside, it is just the Outbox, but implemented in a distributed way:
The web endpoint stores the outbox entries (SQL Server transport messages)
The Router endpoint removes the outbox entries and dispatches “real” messages
There is a sample that demonstrates how to use this approach to do atomic update-and-publish in the web controller.
@jbogard I tried to simplify that a bit and created NServiceBus.Connector.SqlServer (source here), which allows you to easily inject into a controller an instance of IMessageSession that shares the connection and transaction with the data access library.
The controller code is not aware of the connector. It just uses the session:
public class SendMessageController : Controller
{
    readonly IMessageSession messageSession;
    readonly SqlConnection conn;
    readonly SqlTransaction trans;

    public SendMessageController(IMessageSession messageSession, SqlConnection conn, SqlTransaction trans)
    {
        this.messageSession = messageSession;
        this.conn = conn;
        this.trans = trans;
    }

    [HttpGet]
    public async Task<string> Get()
    {
        // Send and Publish go through the connector-provided session, which shares the
        // same per-request connection and transaction as the data access below
        await messageSession.Send(new MyMessage())
            .ConfigureAwait(false);
        await messageSession.Publish(new MyEvent())
            .ConfigureAwait(false);

        // Business data is written using the same connection and transaction
        using (var command = new SqlCommand("insert into Widgets default values;", conn, trans))
        {
            await command.ExecuteNonQueryAsync()
                .ConfigureAwait(false);
        }
        return "Message sent to endpoint";
    }
}
In the setup code:
//Connects to MSMQ transport used by other endpoints
var connectorConfig = new ConnectorConfiguration<MsmqTransport>(
    name: "WebApplication",
    sqlConnectionString: ConnectionString,
    customizeConnectedTransport: extensions => {},
    customizeConnectedInterface: configuration =>
    {
        //Required because connected transport (MSMQ) does not support pub/sub
        configuration.EnableMessageDrivenPublishSubscribe(storage);
    });
//Routing for commands
connectorConfig.RouteToEndpoint(typeof(MyMessage), "Samples.ASPNETCore.Endpoint");
//Start the connector
connector = connectorConfig.CreateConnector();
await connector.Start();
//Register per-request SQL Server connection
services.UseSqlServer(ConnectionString);
//Register automatic opening of per-request transaction
services.UseOneTransactionPerHttpCall();
//Register per-request connector-based message session based on connection/transaction
services.UseNServiceBusConnector(connector);
There is a sample included in the repo that I intend to eventually move to the NServiceBus docs site. My goal is to package the router and the send-only endpoint in such a way that, for the user, it looks just like an ordinary NServiceBus send-only endpoint. The only difference is that it requires two queues: one in SQL Server and the other in the external transport. That second queue is used for managing pub/sub in case the external transport does not handle it natively.
@SzymonPobiega I want to implement the outbox pattern in my web APIs and was looking into the NServiceBus.Connector.SqlServer package. Please correct me if I am wrong, but I don’t believe this solution supports multi-tenancy (which is something that I require for my implementation). Are you able to suggest some solutions that would support multi-tenancy? I have considered callbacks and SendLocal, but neither is ideal for my case. At this point, I am inclined to implement my own outbox table, but would love to hear alternative solutions.
I was able to leverage some of the built-in classes to write to the NServiceBus outbox table. I have a simple solution that contains this implementation here.
Could you please take a look whenever you have a chance and let me know if there are any issues with this approach? The main code files are UnitOfWorkFilter.cs, WebHostedOutboundMessageBehavior.cs and PendingOperationsDispatcher.cs
I also intend to have a separate process to clean up the Outbox table as well as to reprocess any dispatch failures.
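For reference, I imagine the cleanup part could be a simple hosted service along these lines; the table and column names are assumptions that would need to match the actual outbox schema:

public class OutboxCleanupService : BackgroundService
{
    readonly string connectionString;
    static readonly TimeSpan Retention = TimeSpan.FromDays(7);

    public OutboxCleanupService(string connectionString) => this.connectionString = connectionString;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                await connection.OpenAsync(stoppingToken);
                // Remove records that were dispatched longer than the retention window ago.
                // Table and column names are assumptions; adjust to the real outbox schema.
                using (var command = new SqlCommand(
                    "delete from OutboxData where Dispatched = 1 and DispatchedAt < @cutoff",
                    connection))
                {
                    command.Parameters.AddWithValue("@cutoff", DateTime.UtcNow - Retention);
                    await command.ExecuteNonQueryAsync(stoppingToken);
                }
            }
            await Task.Delay(TimeSpan.FromMinutes(5), stoppingToken);
        }
    }
}

Reprocessing dispatch failures would be a similar loop that picks up undispatched rows and pushes them through the dispatcher again.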
I’ve modified the way the SqlConnection is created in the connector sample (the one included in the repo) and it seems to work. Here’s the new code:
serviceCollection.AddScoped((serviceProvider) =>
{
    var httpContext = serviceProvider.GetService<IHttpContextAccessor>().HttpContext;
    if (httpContext.Request.Query
        .TryGetValue("tenant", out var tenant))
    {
        return new SqlConnection(connectionString.Replace("initial catalog=connector", $"initial catalog=connector-{tenant}"));
    }
    return new SqlConnection(connectionString);
});
For simplicity, it just takes the tenant ID from the query string and uses it to build the connection string. Each tenant has its own catalog within the same instance of SQL Server. The router built into the connector is configured to use a shared catalog for messages, so handling a request ends up being a single transaction that both stores data in the tenant catalog and sends a message to a queue table in the shared catalog. Because SQL Server supports atomic transactions that span multiple catalogs, this works fine.
What is not possible is having tenant catalogs in different SQL Server instances. You could overcome that by introducing sharding (multiple shards, each shard containing many tenants, each shard using a different SQL Server instance). That would require running a separate instance of the router for each shard. The Connector could easily be extended to accept multiple connection strings and spin up a router for each one.
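A rough sketch of what the per-request connection resolution could look like with shards; the shard map, tenant assignments, and connection strings are placeholders, and this is not an existing Connector feature:

// Hypothetical shard map: each shard is a separate SQL Server instance
var shardConnectionStrings = new Dictionary<string, string>
{
    ["shard-a"] = "Server=sql-a;Database=connector;Integrated Security=true",
    ["shard-b"] = "Server=sql-b;Database=connector;Integrated Security=true"
};

// Hypothetical tenant-to-shard assignment (could come from a configuration store)
var tenantToShard = new Dictionary<string, string>
{
    ["tenant-1"] = "shard-a",
    ["tenant-2"] = "shard-b"
};

serviceCollection.AddScoped((serviceProvider) =>
{
    var httpContext = serviceProvider.GetService<IHttpContextAccessor>().HttpContext;
    if (httpContext.Request.Query.TryGetValue("tenant", out var tenant))
    {
        // Pick the shard's SQL Server instance, then the tenant catalog on that instance
        var shardConnectionString = shardConnectionStrings[tenantToShard[tenant.ToString()]];
        return new SqlConnection(shardConnectionString.Replace(
            "Database=connector", $"Database=connector-{tenant}"));
    }
    return new SqlConnection(defaultConnectionString);
});

// A separate router/connector instance would also have to run per shard (not shown)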
Thank you @SzymonPobiega for sharing this! Our tenant databases are across different SQL Server instances. I will look into the sharding approach and running a separate instance of router per shard.
@adeepak12 for a small number of tenant databases (<10) the connector could be modified to handle multiple databases. That should be an easy change. For a larger number of tenant databases that would probably not be a good solution, because each tenant DB has to be polled for new messages: the number of SQL queries issued per second would grow with each tenant, regardless of whether that tenant is busy or not.
An alternative that would work with SQL Azure is Elastic Transactions. You can have one shared DB for the messages, and each tenant would have a separate data DB. If these DBs are linked via elastic transactions, you would be able to modify the data and send a message in a single TransactionScope. For that to work the Connector would have to support the TransactionScope mode (another small change).
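For illustration, this is roughly what a single TransactionScope spanning a tenant data DB and the shared messages DB looks like, assuming Azure SQL elastic transactions are set up between the two servers; the connection strings, table names, and the direct insert into a queue table (standing in for what the Connector/transport would do) are placeholders:

using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    // Connection to the tenant's data catalog
    using (var dataConnection = new SqlConnection(tenantDataConnectionString))
    {
        await dataConnection.OpenAsync();
        using (var command = new SqlCommand("insert into Widgets default values;", dataConnection))
        {
            await command.ExecuteNonQueryAsync();
        }
    }

    // Connection to the shared messages catalog; opening a second connection inside the
    // scope escalates it to an elastic (distributed) transaction on Azure SQL
    using (var messagesConnection = new SqlConnection(sharedMessagesConnectionString))
    {
        await messagesConnection.OpenAsync();
        // In reality the Connector / SQL Server transport would enlist its send here
        using (var command = new SqlCommand("insert into MessagesQueue default values;", messagesConnection))
        {
            await command.ExecuteNonQueryAsync();
        }
    }

    scope.Complete();
}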
@SzymonPobiega Agreed, I don’t think this will scale beyond a small number of tenant db’s. At this point, I am inclined towards writing to the local outbox table. Thank you for your suggestions, I really appreciate it!