I have some questions about how to structure a DevOps CI/CD workflow when implementing SOA services using Azure DevOps and Infrastructure as Code. What I'm really interested in is who owns the physical implementation of the IaC.
For background, I have worked on a few “microservice” projects where the typical CI/CD workflow was to build docker images, push them to a registry, and then deploy to a kubernetes cluster using Helm charts. Infrastructure dependencies specific to a single deployment unit (microservice), such as its database, storage account, managed identity, key vault, etc., are created from the same Azure Pipeline using IaC (Infrastructure as Code). Depending on the pipeline stage/environment, the resulting configuration data (e.g. connection strings, secrets, etc.) is stored in the service's key vault so that the service can obtain it when its docker container starts. In short, the service's repository contains its infrastructure and application code, and configuration data comes from the key vault.
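To make that concrete, here's a rough sketch of the kind of per-service pipeline I mean. All names, service connections and feeds are made up, and I'm assuming Bicep for the IaC and a Helm chart living in the same repo:

```yaml
# Hypothetical per-service pipeline (service, registry and connection names are placeholders)
trigger:
  branches:
    include: [ main ]

stages:
- stage: Build
  jobs:
  - job: BuildImage
    steps:
    - task: Docker@2
      inputs:
        command: buildAndPush
        containerRegistry: my-acr              # assumed container registry service connection
        repository: orders-service
        tags: $(Build.BuildId)

- stage: DeployDev
  dependsOn: Build
  jobs:
  - job: Infrastructure
    steps:
    # Provision the service-owned infrastructure (database, storage, key vault, ...)
    # and have the template write connection strings/secrets into the key vault.
    - task: AzureCLI@2
      inputs:
        azureSubscription: dev-subscription    # assumed ARM service connection
        scriptType: bash
        scriptLocation: inlineScript
        inlineScript: |
          az deployment group create \
            --resource-group rg-orders-dev \
            --template-file infra/main.bicep \
            --parameters env=dev
  - job: DeployService
    dependsOn: Infrastructure
    steps:
    - task: HelmDeploy@0
      inputs:
        connectionType: 'Azure Resource Manager'
        azureSubscription: dev-subscription
        azureResourceGroup: rg-orders-dev
        kubernetesCluster: aks-dev             # assumed shared cluster
        command: upgrade
        chartType: FilePath
        chartPath: charts/orders-service
        releaseName: orders-service-dev
```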
Now with an SOA approach, as I understand things, each business service and each ITOps service has its own source repository, and each repository has a build pipeline that only publishes packages, be they NuGet for dotnet-based code or npm for node, etc. The point being that the packaging pipeline does not deploy to any physical environment, only to package feeds.
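A packaging pipeline for one of those repos might look roughly like this (the project and feed names are placeholders):

```yaml
# Hypothetical packaging pipeline for a business service repo; it publishes
# NuGet packages to a feed and never touches a physical environment.
trigger:
  branches:
    include: [ main ]

steps:
- task: DotNetCoreCLI@2
  inputs:
    command: pack
    packagesToPack: 'src/**/*.csproj'
    versioningScheme: byBuildNumber                    # e.g. build number set to 1.2.$(Rev:r)
- task: NuGetCommand@2
  inputs:
    command: push
    packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg'
    nuGetFeedType: internal
    publishVstsFeed: 'MyProject/business-packages'     # assumed Azure Artifacts feed
```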
Obviously we need to deploy hosts somehow, so this makes me think that there should be a repository (owned by ITOps from a logical standpoint) per system, e.g. “mobile backend”, “Administration backend”, “Mobile APP”, “Admin portal”, etc., each of which references the required packages via a feed and uses ITOps code to auto-register those packages for self-initialization.
These system repos don’t necessarily need to produce only a single deployable artifact either. They can be structured however necessary to meet performance and scaling requirements. The mobile backend, as an example, could produce two docker images: one containing viewmodel composition ACs and another just hosting ACs with sagas. Or it could be split further, with a docker image per business service's ACs.
From a deployment workflow perspective, Azure DevOps has the ability to trigger a (downstream) pipeline defined in one repo after the successful completion of another (upstream) pipeline. So on successful publication of a business service's packages, each system repo that uses those packages can be triggered to build, deploy and pick up the changes - assuming a wildcard package reference is used, e.g. 1.2.* .
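For example, a system repo's pipeline could declare the upstream packaging pipeline as a resource and then restore against the feed with wildcard references. The pipeline, feed and image names here are invented:

```yaml
# Hypothetical system-repo pipeline triggered by an upstream packaging pipeline.
resources:
  pipelines:
  - pipeline: ordersPackages                     # local alias for the resource
    source: Orders.Service.Packaging             # assumed name of the upstream packaging pipeline
    trigger:
      branches:
        include: [ main ]

trigger:
  branches:
    include: [ main ]                            # also build on changes to the system repo itself

steps:
# Restore resolves wildcard references such as
# <PackageReference Include="Orders.Service.Contracts" Version="1.2.*" />
# against the feed, so the newest business packages are picked up automatically.
- task: DotNetCoreCLI@2
  inputs:
    command: restore
    projects: 'src/**/*.csproj'
    feedsToUse: select
    vstsFeed: 'MyProject/business-packages'      # assumed Azure Artifacts feed
- task: Docker@2
  inputs:
    command: buildAndPush
    containerRegistry: my-acr                    # assumed registry service connection
    repository: mobile-backend
    tags: $(Build.BuildId)
```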
So now the concern is what to do about infrastructure logically owned by a service when using Infrastructure as Code?
I say “owned” because there’s always some infrastructure that is cross-service, e.g. a kubernetes cluster. Such infrastructure is usually managed by a separate repository containing the code to create it, and is only modified when changes are required to that shared infrastructure.
Let’s consider a business service that requires a SQL Server database, where several ACs may use that database. Those ACs could be hosted in different systems. NOTE: just to complicate matters, also consider that the service’s handlers require the use of an outbox because they persist domain data changes and publish messages in the same handler.
If we assume the repository where the IaC exists actually has release stages and executes its IaC during those stages, as opposed to merely publishing it as a package during a build step, in which repository should the IaC for the database exist?
- In the business service’s repository.
Putting it here couples the logical business service to physical aspects. Feels wrong but does keep physical artifacts close to the service that uses them.
Also, we would need to consider the technical requirements around using an outbox. For the latter to work, its table needs to exist in the same database as the service’s own database; however, if two services’ ACs both want outbox behaviour then they must all share the same database. This complicates hosting ACs from multiple services in the same host.
- In the system’s repository.
Putting it here keeps the host and its other infrastructure together, meaning the system owns the need for infrastructure beyond just its NSB requirements such as persistence or outbox. The system’s host projects determine which packages they use and essentially force those packages to use the database the host provides, i.e. ensuring a business service uses the same database as the host’s persistence database ensures the proper setup of outbox behaviour.
The question then arises of how to deal with a business AC that has a database requirement and is used across different systems. The AC should only ever use the one source of data, so which system would own the database creation code? Or does ITOps limit the use of such ACs to only one system?
- In a separate repository entirely.
This feels closer to a traditional approach where ITOps would manually create all the required environments and provide configuration data through configuration management. It supports cross-system infrastructure, much like the repository for the kubernetes cluster, so you could argue for bundling them together.
I prefer no. 2 as it allows for spinning up a new system relatively easily, say for a weekly performance test. However, I’m not sure if the limitation I mentioned will bite in the end.
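For illustration, option 2 in a system repo might look something like the sketch below: the system's pipeline provisions the shared database first and then deploys the host images. Again, all names, the Bicep template and the chart path are hypothetical:

```yaml
# Hypothetical pipeline for a system repo ("mobile backend"); every name here is a placeholder.
stages:
- stage: Infrastructure
  jobs:
  - job: ProvisionDatabase
    steps:
    - task: AzureCLI@2
      inputs:
        azureSubscription: dev-subscription       # assumed ARM service connection
        scriptType: bash
        scriptLocation: inlineScript
        inlineScript: |
          # One SQL database per system/environment; persistence and outbox
          # tables for every AC hosted in this system point at this database.
          az deployment group create \
            --resource-group rg-mobile-backend-dev \
            --template-file infra/sql.bicep \
            --parameters env=dev systemName=mobile-backend

- stage: DeployHosts
  dependsOn: Infrastructure
  jobs:
  - job: DeployImages
    steps:
    - task: HelmDeploy@0
      inputs:
        connectionType: 'Azure Resource Manager'
        azureSubscription: dev-subscription
        azureResourceGroup: rg-mobile-backend-dev
        kubernetesCluster: aks-dev                # assumed shared cluster
        command: upgrade
        chartType: FilePath
        chartPath: charts/mobile-backend
        releaseName: mobile-backend-dev
```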
I’d be interested to know how others have approached these operational concerns.
Sorry for the long post.