Today I will be using the traffic-splitting capability of Azure Container Apps. Azure Container Apps implements container app versioning by creating revisions. A revision is an immutable snapshot of a container app version.

What excited me about revision management is its traffic-splitting capability, which we can leverage to deliver blue-green deployments and/or A/B testing.

Generally, if we have an ingress-enabled container app available via HTTP, we can do A/B testing with ease. We deploy a new revision and dictate the traffic weight (load percentage) we want to send to it. It is that easy, and once we are “confident” about the stability of that revision (based on our observability/monitoring systems), we can increase the weight to 100% and deactivate the old revisions. This is a very neat concept but not a new one; I have done this before on Kubernetes using Linkerd's traffic-splitting features. What impressed me is how easy it is to achieve on Azure Container Apps: you hardly do anything special besides defining a pipeline that deploys Bicep templates. All the complexity behind this (i.e., configuring and using Envoy proxies) is abstracted away from our workflow entirely.
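As a concrete illustration of what the platform is doing for us, the same weight adjustment can be made by hand with the Azure CLI (containerapp extension); the app, resource group, and revision names below are placeholders, not the ones used later in this post:

# Send 20% of traffic to a newly deployed revision, keeping 80% on the current one
az containerapp ingress traffic set -n my-frontend -g my-rg \
  --revision-weight my-frontend--current=80 my-frontend--new=20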
Deploy a new revision and gradually increase traffic weight
I have a frontend application (ingress enabled). Here I will see if I can deploy a new revision for this application, gradually increase the traffic weight to it, and at some point remove the older revision. I will start by defining a Bicep module for a container app, so I can later reuse a common module to provision multiple apps.
// Removed other code for brevity
param trafficDistribution array = [
  {
    latestRevision: true
    weight: 100
  }
]

var sanitizedRevisionSuffix = substring(revisionSuffix, 0, 10)
var useCustomRevisionSuffix = revisionMode == 'Multiple'

resource containerApp 'Microsoft.App/containerApps@2022-03-01' = {
  name: containerAppName
  location: location
  identity: hasIdentity ? {
    type: 'UserAssigned'
    userAssignedIdentities: {
      '${uami.id}': {}
    }
  } : null
  properties: {
    managedEnvironmentId: environment.id
    configuration: {
      activeRevisionsMode: revisionMode
      secrets: secrets
      registries: isPrivateRegistry ? [
        {
          server: containerRegistry
          identity: useManagedIdentityForImagePull ? uami.id : null
          username: useManagedIdentityForImagePull ? null : containerRegistryUsername
          passwordSecretRef: useManagedIdentityForImagePull ? null : registryPassword
        }
      ] : null
      ingress: enableIngress ? {
        external: isExternalIngress
        targetPort: containerPort
        transport: 'auto'
        traffic: trafficDistribution
      } : null
      dapr: {
        enabled: true
        appPort: containerPort
        appId: containerAppName
      }
    }
    template: {
      revisionSuffix: useCustomRevisionSuffix ? sanitizedRevisionSuffix : null
      containers: [
        {
          image: containerImage
          name: containerAppName
          env: env
        }
      ]
      scale: {
        minReplicas: minReplicas
        maxReplicas: 1
      }
    }
  }
}

output fqdn string = enableIngress ? containerApp.properties.configuration.ingress.fqdn : 'Ingress not enabled'
With this generalized module, we can now create our frontend application. The Bicep for that would look like the following:
module frontendApp 'modules/httpApp.bicep' = {
  name: appNameFrontend
  params: {
    location: location
    containerAppName: appNameFrontend
    environmentName: acaEnvironment.name
    revisionMode: 'Multiple'
    trafficDistribution: [
      {
        revisionName: 'PREV'
        weight: 80
      }
      {
        revisionName: 'NEXT'
        label: 'latest'
        weight: 20
      }
    ]
    revisionSuffix: revisionSuffix
    hasIdentity: true
    userAssignedIdentityName: uami.name
    containerImage: '${containerRegistryName}.azurecr.io/frontend:${tagName}'
    containerRegistry: '${containerRegistryName}.azurecr.io'
    isPrivateRegistry: true
    containerRegistryUsername: ''
    registryPassword: ''
    useManagedIdentityForImagePull: true
    containerPort: 80
    enableIngress: true
    isExternalIngress: true
    minReplicas: 1
  }
}
The part I want to emphasize is the trafficDistribution property in the above snippet. Here we could simply provide the revision names and their respective traffic weights. However, in this case I am going with two placeholder strings, PREV and NEXT, instead of actual revision names. The idea is to replace them in pipeline stages and redeploy with a new traffic configuration: in a pipeline stage I can determine the appropriate revision names, substitute these placeholders in this file, and redeploy with the new weight configuration.
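For reference, outside of a pipeline the same redeployment could be done by hand with the Azure CLI once the placeholders are substituted; the resource group name, template path, and tag value below are illustrative assumptions:

# Redeploy the (placeholder-substituted) frontend template into the resource group
az deployment group create \
  -g xeniel \
  --template-file ./Infrastructure/frontend.bicep \
  --parameters tagName=abc1234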
Prepare traffic weights in pipeline stage
Here’s an example script that dynamically grabs the currently deployed revision name (using the Azure CLI) and replaces the placeholders in the Bicep file above with the correct revision names.
#!/bin/bash
COMMITHASH=$1
FileName=$2
echo "Starting script... Commit hash received: $COMMITHASH, file name: $FileName"

# Make sure the containerapp CLI extension is available
az config set extension.use_dynamic_install=yes_without_prompt
az extension add -n containerapp

# The new revision name follows the <app-name>--<revision-suffix> convention
nextRevisionName="xeniel-frontend--${COMMITHASH:0:10}"

# Grab the name of the currently deployed revision
previousRevisionName=$(az containerapp revision list -n xeniel-frontend -g xeniel --query '[0].name')
prevNameWithoutQuotes=$(echo $previousRevisionName | tr -d "\"") # alternatively: echo $previousRevisionName | sed 's/"//g'

# Substitute the placeholders in the Bicep file with the real revision names
sed -i "s/PREV/$prevNameWithoutQuotes/g" ${PWD}/Infrastructure/$FileName
sed -i "s/NEXT/$nextRevisionName/g" ${PWD}/Infrastructure/$FileName
GitHub workflow for the frontend app
With these pieces in place, we are now ready to create a dedicated GitHub workflow to release our frontend application. The first job in the workflow, unsurprisingly, builds the Docker image and tags it with the Git commit hash. This job also pushes the image to Azure Container Registry.
jobs:
  build-frontend-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: tenhaus/get-release-or-tag@v2
        id: tag
      - name: OIDC Login to Azure
        uses: azure/login@v1
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
          enable-AzPSSession: false
      - name: Building container images
        run: ${PWD}/CognitiveDemo/build-frontend.sh $ImageTag $RegistryName
        env:
          ImageTag: ${{ steps.tag.outputs.tag }}
          RegistryName: "xenielscontainerregistry.azurecr.io"
      - name: Azure logout
        run: az logout
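The build-frontend.sh script itself is not shown in this post; a minimal sketch of what it might contain, assuming it receives the image tag and registry hostname as arguments and that the Dockerfile lives under the frontend folder, could look like this:

#!/bin/bash
# Hypothetical build-frontend.sh: builds, tags, and pushes the frontend image
ImageTag=$1
RegistryName=$2            # e.g. xenielscontainerregistry.azurecr.io

# Log in to ACR using the identity established by azure/login
az acr login --name "${RegistryName%%.*}"

# Build and push the image, tagged with the Git commit hash
# (the Dockerfile path is an assumption)
docker build -t "$RegistryName/frontend:$ImageTag" ./CognitiveDemo/frontend
docker push "$RegistryName/frontend:$ImageTag"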
We can now add another job to the workflow that deploys a new revision with the image we built above, this time routing only 20 percent of the traffic to the new revision.
deploy-frontend-images:
  runs-on: ubuntu-latest
  needs: build-frontend-image
  steps:
    - uses: actions/checkout@v2
    - uses: tenhaus/get-release-or-tag@v2
      id: tag
    - name: OIDC Login to Azure
      uses: azure/login@v1
      with:
        client-id: ${{ secrets.AZURE_CLIENT_ID }}
        tenant-id: ${{ secrets.AZURE_TENANT_ID }}
        subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
        enable-AzPSSession: false
    - name: Prepare Revisions
      run: ${PWD}/Infrastructure/prepare-revisions.sh $ImageTag $FileName
      env:
        ImageTag: ${{ steps.tag.outputs.tag }}
        FileName: "frontend.bicep"
    - name: Deploy Bicep Template
      uses: Azure/arm-deploy@main
      with:
        scope: resourcegroup
        subscriptionId: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
        resourceGroupName: ${{ env.AZURE_RESOURCE_GROUP }}
        template: ./Infrastructure/frontend.bicep
        parameters: 'tagName=${{ steps.tag.outputs.tag }}'
    - name: Azure logout
      run: az logout
Once it is deployed, we will see that the new revision is indeed serving 20 percent of our total traffic. I haven’t integrated any monitoring system into this, but you can feed any metric you care about for your application into this pipeline and gradually redeploy the revision with an increased weight after a certain time. I only used a “delay” in my pipeline to simulate the behavior. It looks like the following:

You can see I have multiple stages, starting with the one that builds and pushes the image; the next one deploys with a 20% weight, then (after a delay) 50%, then (after another delay) 100%, and finally the old revisions are deactivated. This works like a charm. You can see the workflow here.
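If you prefer to do that last promotion step with the Azure CLI instead of another Bicep redeployment, it could look roughly like this (the old revision name below is a placeholder):

# Route all traffic to the latest revision
az containerapp ingress traffic set -n xeniel-frontend -g xeniel --revision-weight latest=100

# Deactivate the old revision once it no longer receives traffic
az containerapp revision deactivate -n xeniel-frontend -g xeniel --revision xeniel-frontend--old123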
To be continued
That is all for today!
Next, I will be looking into network isolation, private endpoints for Azure Container Apps, and more. Stay tuned!
The entire source code can be found on GitHub, if you want to take a look.
Thanks for reading.