Simplifying configuration with Azure App Configuration

When working with NServiceBus on a decent sized system, you inevitably end up with lots of endpoints and lots of configuration. In this article, I’ll show you how Azure App Configuration has vastly simplified our architecture.

Using .NET Core, typically the configuration is stored in the appsettings.json file. We use an internal helper library and some strong typed configuration to help bootstrap our NServiceBus endpoints and logging.

This is an example of what an appsettings.json file might look like for a local development environment:

  {
    "EndpointSettings": {
      "Environment": "Dev",
      "TransportConnection": "Data Source=(local);Initial Catalog=NSB;Integrated Security=True;",
      "PersistenceConnection": "Data Source=(local);Initial Catalog=BusinessDatabase;Integrated Security=True;",
      "ServiceControlQueue": "Particular.ServiceControl",
      "ServiceControlMetricsQueue": "Particular.Monitoring",
      "MonitoringEnabled": "True",
      "AuditSagaChanges": "True",
      "TransportType": "SQL",
      "MultiTenant": "False",
      "HostNameOverride": "",
      "HeartbeatFrequency": 10
    },
    "LoggingSettings": {
      "Papertrail": {
        "Enabled": true,
        "Server": "",
        "Port": 12345,
        "SyslogHostname": ""
      }
    }
  }

For the most part, this configuration stays consistent across endpoints in a given environment, other than the PersistenceConnection, which can differ per endpoint.

Until recently, all these endpoints ran as Windows services, and all the configuration was handled as part of the deployment process with Octopus Deploy. This has worked pretty well for us; however, the developer experience was not perfect.

To simplify the development experience, we moved to using the Microsoft.Extensions.Configuration.EnvironmentVariables library, which, as the name suggests, uses environment variables to store configuration. The base config was still loaded from appsettings.json, and then any matching environment variables would take precedence.
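For example, the nested keys from appsettings.json map to environment variables using a double-underscore separator, so an override for a local developer machine might look like this (values are illustrative):

```
EndpointSettings__Environment=Test
EndpointSettings__MonitoringEnabled=False
LoggingSettings__Papertrail__Port=12345
```

The `__` separator works on every platform, whereas the `:` separator used in configuration keys is not valid in environment variable names on all systems.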


Now that we’re hosting some of our customers’ systems on Azure, we want to start taking advantage of the benefits that brings us, such as using Azure Service Bus and containers. The same endpoint code can use the SQL Transport when running on-premises, or Azure Service Bus in the cloud, by simply changing the TransportConnection and TransportType.
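For instance, given the settings shown earlier, moving an endpoint from the SQL transport to Azure Service Bus is just a matter of changing these two values (connection string elided):

```
"TransportType": "AzureServiceBus",
"TransportConnection": "Endpoint=sb://..."
```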

When running as containers, we have one image that is run across many environments, so we need a way to configure them. We’re currently running our endpoints (other than send-only website endpoints) with Azure Container Instances, and deploying using a multi-container group YAML file. We’re able to pass environment variables in this config that will override any settings that might be in the container’s appsettings.json. Our first attempt looked like this:

apiVersion: '2018-10-01'
location: southcentralus
name: container-group-name
properties:
  containers:
  - name: endpoint-name-1
    properties:
      environmentVariables:
      - name: EndpointSettings__Environment
        value: Test
      - name: EndpointSettings__TransportConnection
        secureValue: Endpoint=sb://;
      - name: EndpointSettings__PersistenceConnection
        secureValue: Data;Initial Catalog=DatabaseName;
      - name: EndpointSettings__ServiceControlQueue
        value: Particular.ServiceControl
      - name: EndpointSettings__ServiceControlMetricsQueue
        value: Particular.Monitoring
      - name: EndpointSettings__MonitoringEnabled
        value: 'True'
      - name: EndpointSettings__AuditSagaChanges
        value: 'False'
      - name: EndpointSettings__TransportType
        value: AzureServiceBus
      - name: EndpointSettings__MultiTenant
        value: 'True'
      - name: EndpointSettings__HostNameOverride
        value: endpoint-name-1
      - name: EndpointSettings__HeartbeatFrequency
        value: '60'
      - name: LoggingSettings__Papertrail__Server
      - name: LoggingSettings__Papertrail__Port
        value: '12345'
      - name: LoggingSettings__Papertrail__SyslogHostname
        value: endpoint-name-1

This worked, and got us up and running. However, for each endpoint we added, most of that config was simply repeated. Ensuring everything stayed in sync, and having to redeploy every time we made a config change became a chore.

Enter Azure App Configuration

Azure App Configuration (currently in preview) is a service that helps you centralize your application and feature settings.

With Azure App Configuration, you simply define keys and values, with optional labels. For us, we want separation between our environments, so we have an App Configuration resource per environment.

Building on the work we’ve done to support environment variables, I wanted to see how we could integrate our endpoints with this service. This is the code I came up with:

private static IConfigurationRoot BuildConfiguration(string endpointName)
{
    var configBuilder = new ConfigurationBuilder()
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
        .AddEnvironmentVariables();

    var configuration = configBuilder.Build();

    var appConfigEndpoint = configuration["AppConfigurationEndpoint"];
    if (string.IsNullOrEmpty(appConfigEndpoint))
    {
        return configuration;
    }

    configBuilder.AddAzureAppConfiguration(options =>
    {
        options.ConnectWithManagedIdentity(appConfigEndpoint)
            // Keys with no label apply environment-wide...
            .Use(KeyFilter.Any, LabelFilter.Null)
            // ...while keys labeled with the endpoint name apply only to this endpoint.
            .Use(KeyFilter.Any, labelFilter: endpointName);
    });

    configuration = configBuilder.Build();

    return configuration;
}
The first thing we do is load configuration from appsettings.json and environment variables as normal. We then check whether a configuration item named AppConfigurationEndpoint has been set, and if so, we call AddAzureAppConfiguration and point it at that endpoint.

When we run our endpoints in Azure, we’re using Managed Identities so that we’re not keeping credentials anywhere. The developer experience with this is great, as Visual Studio has built-in support to connect using your own developer credentials.

Those Use() method invocations tell the service which keys we are interested in. The first one essentially says “give me any keys with no label”; the second says “give me any keys with this label”. This means any environment-wide settings, such as the Service Control queue and transport connection, can simply be set once, while any endpoint-specific settings or overrides can be labeled with the endpoint name.

Here’s what this looks like in the Azure Portal for our two fictitious endpoints named Billing and Sales:
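As a rough, invented illustration of that layout (these values are not taken from the portal), the key space might look something like this:

```
Key                                      Label       Value
EndpointSettings:ServiceControlQueue     (No label)  Particular.ServiceControl
EndpointSettings:TransportType           (No label)  AzureServiceBus
EndpointSettings:PersistenceConnection   Billing     Data Source=...;Initial Catalog=Billing;
EndpointSettings:PersistenceConnection   Sales       Data Source=...;Initial Catalog=Sales;
```

The unlabeled keys are shared by every endpoint in the environment, and each labeled key applies only to the endpoint whose name matches the label.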

For a developer, all we have to do now is create an environment variable named AppConfigurationEndpoint and point it at the appropriate configuration store. For our containers, all those environment variables in the YAML file simply get reduced to this:

  - name: endpoint-name-1
    properties:
      environmentVariables:
      - name: AppConfigurationEndpoint

If we make a change to the config in the portal, we simply restart the endpoint!

I hope this is useful for others!


Hi Mark

We use the App Configuration service as well, to configure some of our internal backends. It’s an interesting approach to use the endpoint name as a label filter. Currently we use the labels to separate the environments.


I like the endpoint labeling, as there are a lot of common settings between endpoints, so it really reduces duplication. I thought about labeling per environment too; I could easily have a Dev label and then a Dev-Billing label.

With how easy it is to copy settings between App Configuration stores, it seemed a little simpler to me to have an App Configuration store per environment. This works really well in combination with managed identities. Once the Service Bus team fixes an issue with the Management Client, I won’t have any stored credentials anymore.