SOA deployment guidance when considering infrastructure dependencies

I have some questions about how to structure a DevOps CI/CD workflow when implementing SOA services using Azure DevOps and Infrastructure as Code. What I’m really interested in is who owns the physical implementation of the IaC.

For background, I have worked on a few “microservice” projects where the typical CI/CD workflow was to build Docker images, push them to a registry, and then deploy to a Kubernetes cluster using Helm charts. Infrastructure dependencies specific to a single deployment unit (microservice), such as its database, storage account, managed identity, key vault, etc., are created from the same Azure Pipeline using IaC (Infrastructure as Code). Depending on the pipeline stage/environment, the specific infrastructure and the resulting configuration data (e.g. connection strings, secrets, etc.) are stored in the service’s key vault so that the service can obtain them when its Docker container starts. In short, the service’s repository contains its infrastructure and application code, and configuration data comes from the key vault.
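As a rough sketch of that workflow (all names, resource groups, and file paths here are illustrative, not taken from a real project), a single service pipeline might build and push the image, then run the service-owned IaC per environment stage:

```yaml
# azure-pipelines.yml (sketch; names and service connections are hypothetical)
trigger:
  branches:
    include: [main]

stages:
  - stage: Build
    jobs:
      - job: BuildImage
        steps:
          - task: Docker@2
            inputs:
              command: buildAndPush
              repository: my-service          # hypothetical image name
              containerRegistry: acr-connection

  - stage: DeployDev
    dependsOn: Build
    jobs:
      - job: Infra
        steps:
          # Provision the service-owned infrastructure (database, storage,
          # managed identity, key vault) and write the resulting connection
          # strings/secrets into the service's key vault, where the container
          # reads them at startup.
          - task: AzureCLI@2
            inputs:
              azureSubscription: dev-subscription
              scriptType: bash
              scriptLocation: inlineScript
              inlineScript: |
                az deployment group create \
                  --resource-group rg-my-service-dev \
                  --template-file infra/main.bicep \
                  --parameters env=dev
```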

Now, with an SOA approach as I understand it, each business service and each ITOps service has its own source repository, each of which has a build pipeline that only publishes packages, be they NuGet for .NET-based code, npm for Node, and so on. The point being that the packaging pipeline does not deploy to any physical environment, only to package feeds.

Obviously we need to deploy hosts somehow, so this makes me think that there should be a repository (logically owned by ITOps) per system, e.g. “mobile backend”, “administration backend”, “mobile app”, “admin portal”, etc., each of which references the required packages via a feed and uses ITOps code to auto-register those packages for self-initialization.

These system repos don’t necessarily need to produce only a single deployable artifact either. They can be structured however necessary to meet performance and scaling requirements. The mobile backend, as an example, could produce two Docker images: one containing view model composition ACs and another hosting just the ACs with sagas. Or perhaps split further and have a Docker image per business service’s ACs.

From a deployment workflow perspective, Azure DevOps has the ability to trigger a (downstream) pipeline defined in one repo after the successful completion of another (upstream) pipeline. So on successful publication of the business packages, each system repo that uses them can be triggered to build, deploy, and pick up the changes, assuming a wildcard package reference is used, e.g. 1.2.*.
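As a sketch of that trigger wiring (the pipeline and project names are hypothetical), the system repo's pipeline declares the upstream packaging pipeline as a resource:

```yaml
# System repo pipeline (sketch): runs automatically after the upstream
# business-service packaging pipeline completes successfully.
resources:
  pipelines:
    - pipeline: businessPackages        # local alias for this resource
      source: BusinessService-CI        # hypothetical upstream pipeline name
      trigger:
        branches:
          include: [main]

steps:
  # With a wildcard version reference (e.g. 1.2.*), the restore picks up
  # the newly published packages from the feed.
  - script: dotnet restore --no-cache
```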

So now the concern is: what should be done about infrastructure logically owned by a service when using Infrastructure as Code?

I say “owned” because there is always some infrastructure that is cross-service, e.g. a Kubernetes cluster. Such infrastructure is usually managed by a separate repository containing the code to create it, and is only modified when changes to that shared infrastructure are required.

Let’s consider a business service that requires a SQL Server database, where several ACs use that database. Those ACs could be hosted in different systems. NOTE: just to complicate matters, also consider that the service’s handlers require an outbox, because they persist domain data changes and publish messages within the same handler.

If we assume the repository where the IaC code lives actually has release stages and executes its IaC during those stages, as opposed to merely publishing it as a package during a build step, in which repository should the IaC for the database exist?

  1. In the business service’s repository.

Putting it here couples the logical business service to physical aspects. That feels wrong, but it does keep the physical artifacts close to the service that uses them.

We would also need to consider the technical requirements around using an outbox. For the outbox to work, its table needs to exist in the same database as the service’s database; however, if the ACs of two services both want outbox behaviour, then they must all share the same database. This complicates hosting ACs from multiple services in the same host.
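For what it's worth, that coupling shows up directly in the endpoint configuration. A sketch using NServiceBus SQL persistence (the endpoint name and connection string variable are hypothetical): the outbox tables live in the persistence database, so that database must be the same one the handlers write their domain data to.

```csharp
var endpointConfiguration = new EndpointConfiguration("MobileBackend.Host");

// The outbox records are stored by the persistence, so the persistence
// connection must target the same database the handlers use for domain data.
var persistence = endpointConfiguration.UsePersistence<SqlPersistence>();
persistence.SqlDialect<SqlDialect.MsSqlServer>();
persistence.ConnectionBuilder(
    () => new SqlConnection(serviceDatabaseConnectionString)); // e.g. read from Key Vault at startup

endpointConfiguration.EnableOutbox();
```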

  2. In the system’s repository.

Putting it here keeps the host and the other infrastructure together, meaning the system owns the need for infrastructure beyond just its NSB requirements, such as persistence or the outbox. The system’s host projects determine which packages they use and essentially force those packages to use the database the host provides; i.e. ensuring a business service uses the same database as the host’s persistence database guarantees the proper setup of outbox behaviour.

The question then arises of how to deal with a business AC with a database requirement that is used across different systems. The AC should only ever use the one source of data, so which system would own the database-creation code? Or does ITOps limit the use of such ACs to a single system?

  3. In a separate repository entirely.

This feels closest to a traditional approach, where ITOps would manually create all the required environments and provide configuration data through configuration management. It supports cross-system infrastructure, much like the repository for the Kubernetes cluster, so you could argue for bundling them together.

I prefer option 2, as it allows spinning up a new system relatively easily, say for a weekly performance test. However, I’m not sure whether the limitation I mentioned will bite in the end.

I’d be interested to know how others have approached these operational concerns.

Sorry for the long post. :slight_smile:

You can say that again!!! :wink:

TL;DR:

IT/OPS is responsible for, and owns, the configuration.

The longer answer…

Note 1: I have zero knowledge of what IaC is or what it looks like.

Note 2: I will probably not answer every question you have or respond to every statement. I hope my answer is valuable. You can always respond with follow-up questions :wink:

First of all, I have the feeling that you ask “which AC owns something?”, but ACs don’t own anything. The logical service is the owner, because that’s where the boundaries are. Obviously an AC is responsible for something, but it’s not the owner of data in a database. The service is the owner, and multiple ACs could access the same data.

Second, IT/OPS owns the infrastructure and thus things like connection strings. So, theoretically, during deployment IT/OPS fills in the connection strings to databases and other resources.

If everything related to IaC is stored in a repository, then IT/OPS would control that repository. The problem arises when the same repository also stores things for testing or even development purposes; I can’t really see how that would work. But imagine configuration stored inside an XML file: the XML configuration could live in the repository where the code is, all within a single service boundary. IT/OPS would do the deployment and change the connection strings to something else.

Sometimes that’s easy. A cluster of SQL Servers could sit behind a single connection string, and the code doesn’t need to know. If you have separate read and write databases, the code does need to know. That’s where IT/OPS enforces policies onto the service that is responsible for the business logic and code. It could also be that you’re running in Azure Functions instead of a Windows service. IT/OPS is responsible for setting this non-functional requirement and enforces the policy onto the services (or their teams, if you’ve separated the services across their own teams).

Does that make sense and answer your questions?

Thanks for the response @Dennis .

> Note 1: I have zero knowledge of what IaC is or what it looks like.

FYI, it’s code that defines what infrastructure to create and how to configure those resources using a desired-state process. In short, it means you don’t manually create resources in cloud provider portals, but instead use a CI/CD process, adding quality control to the infrastructure configuration. Example products supporting this are ARM (Azure Resource Manager) templates, Terraform, and Pulumi.
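As a minimal illustration (Terraform with the azurerm provider; all names and values here are made up), the IaC for a service-owned database, plus handing its connection string to the service's key vault, might look like:

```hcl
# Sketch: a service-owned SQL database as IaC (illustrative names only).
resource "azurerm_mssql_server" "svc" {
  name                         = "sql-myservice-dev"
  resource_group_name          = "rg-myservice-dev"
  location                     = "westeurope"
  version                      = "12.0"
  administrator_login          = "sqladmin"
  administrator_login_password = var.sql_admin_password
}

resource "azurerm_mssql_database" "svc" {
  name      = "myservice"
  server_id = azurerm_mssql_server.svc.id
  sku_name  = "S0"
}

# Hand the resulting connection string to the service via its key vault,
# so the container picks it up at startup.
resource "azurerm_key_vault_secret" "conn" {
  name         = "sql-connection-string"
  key_vault_id = var.key_vault_id
  value        = "Server=${azurerm_mssql_server.svc.fully_qualified_domain_name};Database=${azurerm_mssql_database.svc.name}" # auth settings omitted for brevity
}
```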

> First of all, I have the feeling that you say “Which AC owns something?” but they don’t own anything. The logical service is the owner of something, because that’s where the boundaries are.

I don’t think I actually said the AC owns the resources; I only meant that the service does, and by service I mean the logical business service. The ACs only make use of the resources owned by the service. Sorry if I caused confusion.

> Second, IT/OPS are the owner of the infrastructure and thus things like connectionstrings. So theoretically, during deployment, IT/OPS fills in connectionstrings to databases and other stuff.

I get that ITOps owns the configuration data, and I also agree that ITOps owns the infrastructure. This would imply that ITOps also writes the infrastructure code, isolated from the business service repositories.

As for how to structure infrastructure code across ITOps repositories, that may depend on the resource and its cross-cutting concerns, if any. For example, I would have a separate repo for creating a Kubernetes cluster, which is used by many web apps, whereas where a single web app requires a database instance I’d keep the database IaC code in that web app’s repo. When I say web app here, I mean a container for ACs; ITOps owns the web app, not the ACs.

Thanks for your input Dennis. You’ve helped clarify my thoughts.