Executing Warm-up/Start-up Tasks/HealthChecks Before Message Processing Begins

We have a few NSB handlers that are quite latency sensitive, so we would like to ensure that some data is pre-cached in the app that handles messages before message handling is allowed to begin. Most of this warm-up is async start-up work, such as HTTP calls. The two places we have thought of to do this so far are:

  1. In the same startup flow where we register DI dependencies and start the app, before we call hostBuilder.UseNServiceBus(endpointConfiguration);. The downside is that this context may be non-async, forcing us into sync-over-async. That is likely just because the function we wanted to place it in wasn’t async; since ASP.NET Core supports an async Main, we should be able to overcome it.
  2. In an IHostedService’s StartAsync. Our thinking is that, whether this warm-up IHostedService is registered before or after hostBuilder.UseNServiceBus(endpointConfiguration);, all registered IHostedServices should complete their StartAsync before NSB message handling begins.
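For reference, option 1 could be sketched roughly like this, assuming top-level statements (async Main); WarmUpCachesAsync and the endpoint name are hypothetical stand-ins for our real warm-up calls:

```csharp
var builder = WebApplication.CreateBuilder(args);

// ... register DI dependencies ...

// Because top-level statements give us an async Main, the warm-up can be
// awaited directly, avoiding sync-over-async. Note this runs before
// Build(), so it cannot resolve services from the container.
await WarmUpCachesAsync(); // hypothetical warm-up: HTTP calls, cache priming

var endpointConfiguration = new EndpointConfiguration("MyEndpoint"); // placeholder name
builder.UseNServiceBus(endpointConfiguration);

var app = builder.Build();
app.Run();

static Task WarmUpCachesAsync() => Task.CompletedTask; // stand-in for real work
```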

We are using ASP.NET Core 8, 9, and 10 apps with the standard NServiceBus.Extensions.Hosting package. Any better suggestions? One thing I am really interested in getting documented is whether NSB message handling works the same way as ASP.NET controller actions in that processing does not start until every IHostedService’s StartAsync has completed, as documented here:

StartAsync

StartAsync(CancellationToken) contains the logic to start the background task. StartAsync is called before:

  - The app’s request processing pipeline is configured.
  - The server is started and IHostApplicationLifetime.ApplicationStarted is triggered.

StartAsync should be limited to short running tasks because hosted services are run sequentially, and no further services are started until StartAsync runs to completion.

Could we also get the NServiceBus behavior with regard to health checks in ASP.NET Core documented? We are deploying our containers in Kubernetes, and readiness probes call a common health check endpoint we expose. Each app may implement different health checks based on the infrastructure it depends on, but the result is that our kgateway will not route API requests to a container until its readiness check has passed.

Does NServiceBus have any mechanism that works this way, i.e. so that NSB message handling will not begin until all of the app’s health checks have succeeded?

Could you explain what you mean by “handlers that are quite latency sensitive”?

I’ll provide some code with a potential answer in the next reply, but I’m wondering why the handlers are latency sensitive and why having something “warm up” at startup helps fix this. What is the problem you’re facing and trying to solve with a startup solution?

NServiceBus doesn’t have a health check or anything similar. ASP.NET also doesn’t have a warm-up check, but it does have health checks. A health check doesn’t prevent you from sending HTTP requests to the app, though; it’s just there to verify whether your service is still healthy, whatever that means for your app.

It’s like a container that reports it’s healthy while the website is still down, or a container that reports it’s healthy while the background service in it definitely isn’t picking up work as it should.

You could create something like a WarmUpService:

public class WarmUpService(WarmUpState warmUpState, ILogger<WarmUpService> logger) : IHostedService
{
    public async Task StartAsync(CancellationToken cancellationToken)
    {
        logger.LogInformation("Warm-up starting...");

        // Simulate async warm-up work such as HTTP calls to populate caches.
        // Replace these with real calls in your application.
        await Task.Delay(TimeSpan.FromSeconds(2), cancellationToken);
        logger.LogInformation("Warm-up: cache A populated");

        await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
        logger.LogInformation("Warm-up: cache B populated");

        warmUpState.MarkReady();
        logger.LogInformation("Warm-up complete");
    }

    public Task StopAsync(CancellationToken cancellationToken) => Task.CompletedTask;
}

and an ASP.NET warm-up health check:

public class WarmUpHealthCheck(WarmUpState warmUpState) : IHealthCheck
{
    public Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context,
        CancellationToken cancellationToken = default)
    {
        return Task.FromResult(warmUpState.IsReady
            ? HealthCheckResult.Healthy("Warm-up complete")
            : HealthCheckResult.Unhealthy("Warm-up in progress"));
    }
}

And here’s the state:

public class WarmUpState
{
    readonly TaskCompletionSource ready = new(TaskCreationOptions.RunContinuationsAsynchronously);

    public bool IsReady => ready.Task.IsCompleted;

    public Task WaitUntilReady(CancellationToken cancellationToken = default)
    {
        return ready.Task.WaitAsync(cancellationToken);
    }

    public void MarkReady() => ready.TrySetResult();
}
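As an aside, WaitUntilReady isn’t used by the health check itself, but anything else that must not run before warm-up could await it. A hypothetical example (this DependentService is not part of the sample above):

```csharp
// Hypothetical consumer of WarmUpState: a background service that holds
// off its work until WarmUpService has called MarkReady().
public class DependentService(WarmUpState warmUpState, ILogger<DependentService> logger)
    : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Suspends here until MarkReady() completes the underlying task.
        await warmUpState.WaitUntilReady(stoppingToken);

        logger.LogInformation("Warm-up done; starting dependent work");
        // ... work that relies on the warm caches ...
    }
}
```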

Then in the startup have this:

var builder = WebApplication.CreateBuilder(args);

// 1. Register the shared warm-up state as a singleton
builder.Services.AddSingleton<WarmUpState>();

// 2. Register the warm-up hosted service BEFORE UseNServiceBus.
//    IHostedService.StartAsync calls run sequentially in registration
//    order, so WarmUpService.StartAsync will complete before NServiceBus
//    starts its hosted service and begins processing messages.
builder.Services.AddHostedService<WarmUpService>();

// 3. Configure NServiceBus. Because this registers its own IHostedService
//    internally, and it comes AFTER WarmUpService, message processing
//    will not begin until warm-up is complete.
var endpointConfiguration = new EndpointConfiguration("HealthCheckSample");
endpointConfiguration.UseTransport<LearningTransport>();
endpointConfiguration.UseSerialization<NewtonsoftJsonSerializer>();
builder.UseNServiceBus(endpointConfiguration);

// 4. Register the health check. In Kubernetes, map the readiness probe
//    to /health/ready so the gateway does not route traffic until the
//    app is fully warmed up.
builder.Services
    .AddHealthChecks()
    .AddCheck<WarmUpHealthCheck>("warm-up");

var app = builder.Build();

// Map health check endpoints for Kubernetes probes:
//   Readiness: /health/ready  (includes the warm-up check)
//   Liveness:  /health/live   (always returns healthy)
app.MapHealthChecks("/health/ready");
app.MapHealthChecks("/health/live", new()
{
    Predicate = _ => false // no checks, always healthy
});

app.Run();

Now your monitoring service can verify whether the app has warmed up, and NServiceBus doesn’t process messages until the warm-up phase has completed.
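On the Kubernetes side, the readiness probe could then point at the endpoint above. A sketch, with the port and timings as placeholders:

```yaml
# Readiness gates gateway traffic on the warm-up check; liveness stays cheap.
readinessProbe:
  httpGet:
    path: /health/ready
    port: 8080          # adjust to the container's HTTP port
  periodSeconds: 5
  failureThreshold: 3
livenessProbe:
  httpGet:
    path: /health/live
    port: 8080
```

Keep in mind the probe only gates HTTP traffic through the gateway; it’s the hosted-service registration order that gates NServiceBus message processing.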


Thanks for the response and the excellent suggestion, Dennis. I think what you suggested is just what I needed to know.

Since you asked about what I mean by “handlers that are quite latency sensitive”: we have some processes/workflows where we receive a webhook from an external system and internally send a message that goes through a handler, which may flow through 2-3 more handlers in various microservices, after which we need to send an HTTP request back to the external system. All of this has to occur within 30s, or maybe 60s, or there is a negative impact on our business, such as missed sales opportunities. If one of the 3 or 4 handlers in this workflow has to use input from an external system, or just some internal data source with variable latency, and pauses for 45s, that could cost us. So, where we can, we keep data cached in memory or in Redis in hopes of reducing some of this variability.

That makes sense, especially with the scaling out. So the idea is that when you scale out, a new endpoint instance starts with a cold cache, and messages it processes before the cache is warm take longer. That’s why you want the cache to be hot before processing messages.

:+1: