Azure DevOps Multi-Stage pipelines for Enterprise AKS scenarios

Background

Multi-stage Azure Pipelines let us write the build (continuous integration) and deployment (continuous delivery) as pipeline-as-code (YAML) that is stored in version control (a Git repository). However, deploying to multiple environments (test, acceptance, production, etc.) needs approvals/control gates. Often different stakeholders (product owners, operations folks) are involved in that approval process. In addition, it is not uncommon to restrict the secrets/credentials of higher-order stages (i.e. production) from developers.

The good news is that Azure DevOps supports all of this through the notions of Environments and resources. The idea is that an environment (e.g. Production) is configured with resources (e.g. Kubernetes, virtual machines, etc.) and an approval policy is then configured for that environment. When a pipeline targets the environment in a deployment stage, it pauses with a pending approval from the responsible authorities (i.e. groups or users). Azure DevOps offers an awesome UI to create environments and set up approval policies.

The problem begins when we want to automate environment creation to scale the process.

Problem statement

As of today (at the time of writing), provisioning environments and setting up approval policies via the REST API is not documented and not publicly available – there is a feature request awaiting.
In this article, I will share some code that can be used to automate environment provisioning and approval policy management.

Scenario

It’s fairly common (in fact, a best practice) to logically isolate AKS clusters for separate teams and projects, to minimize the number of physical AKS clusters we deploy while still isolating teams or applications.

With logical isolation, a single AKS cluster can be used for multiple workloads, teams, or environments. Kubernetes Namespaces form the logical isolation boundary for workloads and resources.

When we set up such isolation for multiple teams, it’s crucial to automate the bootstrapping of team projects in Azure DevOps – setting up scoped environments, service accounts so teams can’t deploy to other teams’ namespaces, and so on. The need for automation is right there – and that’s what this article is about.

The process I am trying to establish is as follows:

  1. Cluster administrators provision a namespace for a team (GitOps)
  2. An environment for the team’s namespace is created automatically and approvals are configured
  3. Team 1 uses the environment in their multi-stage pipeline

Let’s do this!

Provision namespace for teams

It all begins with a demand from a team – they need a namespace for development/deployment. The cluster administrators keep a Git repository that contains the Kubernetes manifest files describing these namespaces, and a pipeline applies them to the cluster each time a file is added or modified. This repository is restricted to the cluster administrators (operations folks) only. Developers can issue a pull request, but PR approvals and commits to master should only be accepted by a cluster administrator or someone with a similar responsibility.

After that, we create a service account for each of the namespaces. These are the accounts that will be used later when we define an Azure DevOps environment for each team.
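
To make this concrete, here is a minimal sketch (names are illustrative, not from the original repository) of what such a namespace and service account manifest pair could look like. The purpose and project labels correspond to what the automation console app, described later in this article, uses to discover the accounts and to resolve the owning Azure DevOps team project.

apiVersion: v1
kind: Namespace
metadata:
  name: team-1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: team-1-deployer
  namespace: team-1
  labels:
    # used by the automation to find the relevant accounts
    purpose: ado-automation
    # the Azure DevOps team project this namespace belongs to
    project: Team1-Project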

Now the pipeline for this repository essentially applies all the manifests (for both namespaces and service accounts) to the cluster.

trigger:
- master
stages:
- stage: Build
  displayName: Provision namespace and service accounts
  jobs:  
  - job: Build
    displayName: Update namespace and service accounts
    steps:
      <… omitted irrelevant codes …>
      - bash: |
          kubectl apply -f ./namespaces 
        displayName: 'Update namespaces'
      - bash: |
          kubectl apply -f ./ServiceAccounts 
        displayName: 'Update service accounts'   
      - bash: |
          dotnet ado-env-gen.dll
        displayName: 'Provision Azure DevOps Environments'       

At this point, we have a service account configured for each namespace, which we will use to create the environment, endpoints, etc. You might notice that I have added labels to each service account (i.e. purpose=ado-automation); this is to tag the Azure DevOps project name along with the service account. This will come in handy when we provision the environments.

The last task runs a .NET Core console app (i.e. ado-env-gen.dll) – which I will describe in detail later in this article.

Provisioning Environment in Azure DevOps

NOTE: Provisioning environments via the REST API is currently undocumented and might change in the future – be aware of that.

It takes multiple steps to create an environment in Azure DevOps. The steps are below:

  1. Create a Service endpoint with Kubernetes Service Account
  2. Create an empty environment (with no resources yet)
  3. Connect the service endpoint to the environment as Resource

I’ve used .NET (C#) for this, but any REST client technology could do it.

Creating Service Endpoint

The following method creates a service endpoint in Azure DevOps that uses a service account scoped to a given namespace.

        public async Task<Endpoint> CreateKubernetesEndpointAsync(
            Guid projectId, string projectName,
            string endpointName, string endpointDescription,
            string clusterApiUri,
            string serviceAccountCertificate, string apiToken)
        {
            return await GetAzureDevOpsDefaultUri()
                .PostRestAsync<Endpoint>(
                $"{projectName}/_apis/serviceendpoint/endpoints?api-version=6.0-preview.4",
                new
                {
                    authorization = new
                    {
                        parameters = new
                        {
                            serviceAccountCertificate,
                            isCreatedFromSecretYaml = true,
                            apitoken = apiToken
                        },
                        scheme = "Token"
                    },
                    data = new
                    {
                        authorizationType = "ServiceAccount"
                    },
                    name = endpointName,
                    owner = "library",
                    type = "kubernetes",
                    url = clusterApiUri,
                    description = endpointDescription,
                    serviceEndpointProjectReferences = new List<Object>
                    {
                        new
                        {
                            description = endpointDescription,
                            name =  endpointName,
                            projectReference = new
                            {
                                id =  projectId,
                                name =  projectName
                            }
                        }
                    }
                }, await GetBearerTokenAsync());
        }

We will find out how to invoke this method in a moment. Before that – step 2 – let’s create the empty environment.

Creating Environment in Azure DevOps

        public async Task<PipelineEnvironment> CreateEnvironmentAsync(
            string project, string envName, string envDesc)
        {
            var env = await GetAzureDevOpsDefaultUri()
                .PostRestAsync<PipelineEnvironment>(
                $"{project}/_apis/distributedtask/environments?api-version=5.1-preview.1",
                new
                {
                    name = envName,
                    description = envDesc
                },
                await GetBearerTokenAsync());

            return env;
        }

Now we have an environment, but it is still empty. We need to add a resource to it – the service endpoint – so the environment comes to life.

        public async Task<string> CreateKubernetesResourceAsync(
            string projectName, long environmentId, Guid endpointId,
            string kubernetesNamespace, string kubernetesClusterName)
        {
            var link = await GetAzureDevOpsDefaultUri()
                            .PostRestAsync(
                            $"{projectName}/_apis/distributedtask/environments/{environmentId}/providers/kubernetes?api-version=5.0-preview.1",
                            new
                            {
                                name = kubernetesNamespace,
                                @namespace = kubernetesNamespace,
                                clusterName = kubernetesClusterName,
                                serviceEndpointId = endpointId
                            },
                            await GetBearerTokenAsync());
            return link;
        }

Of course, the environment needs to have approval policies configured. The following method configures an Azure DevOps group as an approver for the environment. Hence, any pipeline that references this environment will pause and wait for approval from one of the members of that group.

        public async Task<string> CreateApprovalPolicyAsync(
            string projectName, Guid groupId, long envId, 
            string instruction = "Please approve the Deployment")
        {
            var response = await GetAzureDevOpsDefaultUri()
                .PostRestAsync(
                $"{projectName}/_apis/pipelines/checks/configurations?api-version=5.2-preview.1",
                new
                {
                    timeout = 43200,
                    type = new
                    {                                   
                        name = "Approval"
                    },
                    settings = new
                    {
                        executionOrder = 1,
                        instructions = instruction,
                        blockedApprovers = new List<object> { },
                        minRequiredApprovers = 0,
                        requesterCannotBeApprover = false,
                        approvers = new List<object> { new { id = groupId } }
                    },
                    resource = new
                    {
                        type = "environment",
                        id = envId.ToString()
                    }
                }, await GetBearerTokenAsync());
            return response;
        }

So far so good, but we need to stitch all of this together. Before we do so, one last item needs attention. We want to create a service connection to the Azure Container Registry so the teams can push/pull images to it – and we do that using service principals designated for the teams, instead of the admin keys of the ACR.

Creating Container Registry connection

The following snippet provisions a service connection to an Azure Container Registry with a service principal – which can have fine-grained RBAC roles (i.e. AcrPush or AcrPull) that make sense for the team.

        public async Task<string> CreateAcrConnectionAsync(
            string projectName, string acrName, string name, string description,
            string subscriptionId, string subscriptionName, string resourceGroup,
            string clientId, string secret, string tenantId)
        {
            var response = await GetAzureDevOpsDefaultUri()
                .PostRestAsync(
                $"{projectName}/_apis/serviceendpoint/endpoints?api-version=5.1-preview.2",
                new
                {
                    name,
                    description,
                    type = "dockerregistry",
                    url = $"https://{acrName}.azurecr.io",
                    isShared = false,
                    owner = "library",
                    data = new
                    {
                        registryId = $"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.ContainerRegistry/registries/{acrName}",
                        registrytype = "ACR",
                        subscriptionId,
                        subscriptionName
                    },
                    authorization = new
                    {
                        scheme = "ServicePrincipal",
                        parameters = new
                        {
                            loginServer = $"{acrName}.azurecr.io",
                            servicePrincipalId = clientId,
                            tenantId,
                            serviceprincipalkey = secret
                        }
                    }
                },
                await GetBearerTokenAsync());
            return response;
        }

We are pretty close to a wrap. We’ll stitch all the methods above together. The plan is to create a simple console application that does everything (using the above methods). Here are the pseudo-steps:

  1. Find all service accounts created for this purpose
  2. For each service account, determine the correct team project and:
    • Create a service endpoint with the account
    • Create an environment
    • Connect the service endpoint to the environment (adding it as a resource)
    • Configure approval policies
    • Create an Azure Container Registry connection

The first step needs to communicate with the cluster – obviously. I have used the official .NET client for Kubernetes for that.

Bringing all together

All the above methods are invoked from a simple C# console application. Below is the relevant part of the Main method that brings it all together:

        private static async Task Main(string [] args)
        {
            var clusterApiUrl = Environment.GetEnvironmentVariable("AKS_URI");
            var adoUrl = Environment.GetEnvironmentVariable("AZDO_ORG_SERVICE_URL");
            var pat = Environment.GetEnvironmentVariable("AZDO_PERSONAL_ACCESS_TOKEN");
            var adoClient = new AdoClient(adoUrl, pat);
            var groups = await adoClient.ListGroupsAsync();

            var config = KubernetesClientConfiguration.BuildConfigFromConfigFile();
            var client = new Kubernetes(config);

We start by collecting some secrets and configuration data – all from environment variables – so we can run this console app as a pipeline task and use pipeline variables with ease.

        var accounts = await client
            .ListServiceAccountForAllNamespacesAsync(labelSelector: "purpose=ado-automation");

This gets us the list of all the service accounts we have provisioned specifically for this purpose (filtered using the labels).

            foreach (var account in accounts.Items)
            {
                var project = await GetProjectAsync(account.Metadata.Labels["project"], adoClient);
                var secretName = account.Secrets[0].Name;
                var secret = await client
                    .ReadNamespacedSecretAsync(secretName, account.Metadata.NamespaceProperty);

We iterate over all the accounts and retrieve their secrets from the cluster. Next step: creating the environment with these secrets.

                var endpoint = await adoClient.CreateKubernetesEndpointAsync(
                    project.Id,
                    project.Name,
                    $"Kubernetes-Cluster-Endpoint-{account.Metadata.NamespaceProperty}",
                    $"Service endpoint to the namespace {account.Metadata.NamespaceProperty}",
                    clusterApiUrl,
                    Convert.ToBase64String(secret.Data["ca.crt"]),
                    Convert.ToBase64String(secret.Data["token"]));

                var environment = await adoClient.CreateEnvironmentAsync(project.Name,
                    $"Kubernetes-Environment-{account.Metadata.NamespaceProperty}",
                    $"Environment scoped to the namespace {account.Metadata.NamespaceProperty}");

                await adoClient.CreateKubernetesResourceAsync(project.Name, 
                    environment.Id, endpoint.Id,
                    account.Metadata.NamespaceProperty,
                    account.Metadata.ClusterName);

That gives us the environment – correctly configured with the appropriate service account. Let’s set up the approval policy now:

                var group = groups.FirstOrDefault(g => g.DisplayName
                    .Equals($"[{project.Name}]\\Release Administrators", StringComparison.OrdinalIgnoreCase));
                await adoClient.CreateApprovalPolicyAsync(project.Name, group.OriginId, environment.Id);

We take a designated project group, “Release Administrators”, and set its members as approvers.

            await adoClient.CreateAcrConnectionAsync(project.Name, 
                Environment.GetEnvironmentVariable("ACRName"), 
                $"ACR-Connection", "The connection to the ACR",
                Environment.GetEnvironmentVariable("SubId"),
                Environment.GetEnvironmentVariable("SubName"),
                Environment.GetEnvironmentVariable("ResourceGroup"),
                Environment.GetEnvironmentVariable("ClientId"), 
                Environment.GetEnvironmentVariable("Secret"),
                Environment.GetEnvironmentVariable("TenantId"));

Lastly, we create the ACR connection as well.

The entire project is in GitHub – in case you want to have a read!

Verify everything

We have our orchestration completed. Every time we add a new team, we create a manifest for their namespace and service account and raise a PR to the repository described above. A cluster admin approves the PR and a pipeline kicks off.

The pipeline ensures:

  1. All the namespaces and service accounts are created
  2. An environment with the appropriate service account is created in the correct team project.

Now a team can create their own pipeline in their repository – referring to the environment. Voilà, everything starts working nicely. All they need is to refer to the name of the environment that’s provisioned for their team (for instance “team-1”), as in the following example:

- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build
  jobs:
  - deployment: Deploy
    condition: and(succeeded(), not(startsWith(variables['Build.SourceBranch'], 'refs/pull/')))
    displayName: Deploy
    pool:
      vmImage: $(vmImageName)
    environment: 'Kubernetes-Cluster-Environment.team-1'
    strategy:
      runOnce:
        deploy:
          steps:
          - download: current
            artifact: kube-manifests
          - task: KubernetesManifest@0
            displayName: Deploy to Kubernetes cluster
            inputs:
              action: deploy
              manifests: |
                $(Pipeline.Workspace)/kube-manifests/all-template.yaml

Now the multi-stage pipeline knows how to talk to the correct namespace in AKS, with approvals in place.

Conclusion

This might appear to be overkill for small-scale projects, as it involves quite some development and maintenance overhead. However, on multiple occasions (especially within large enterprises), I have experienced the need for orchestration via the REST API to onboard teams in Azure DevOps, to bootstrap configuration across multiple teams’ projects, and so on. If you’re in the same boat, this article might be an interesting read for you!

Thanks for reading!

Terraforming Azure DevOps

Background

In many organizations, especially large enterprises, there’s a need to automate Azure DevOps projects and team membership. Manually managing a large number of Azure DevOps projects, the teams for those projects and the users in those teams – on-boarding and off-boarding team members – is not trivial.

Besides managing users, sometimes we just need an overview (documentation?) of the users and teams of our projects. Terraform is a great tool for Infrastructure as Code – it not only allows provisioning infrastructure on demand, but also gives us nice documentation that can be version controlled in a source control system. The workflow looks roughly like the following:

Figure: GitOps workflow for managing Azure DevOps with Terraform

I am developing a Terraform provider for Azure DevOps that helps me use Terraform to provision Azure DevOps projects, teams and members. In this article I will share how I am building it.

Note
This provider doesn't implement the complete set of 
Azure DevOps REST APIs. 
It's limited to projects, teams and member associations only. 
It's not recommended to use it in production scenarios.

Terraform Provider

Terraform is an amazing tool that lets you define your infrastructure as code. Under the hood it’s an incredibly powerful state machine that makes API requests and marshals resources. Terraform has lots of providers – almost one for every major cloud – as well as for many other systems, like Kubernetes, Palo Alto Networks, etc.

In a nutshell, if a system has a REST API, it can be managed with a Terraform provider. Azure DevOps also has a Terraform provider – which doesn’t currently provide resources to create teams and members. Hence, I am writing my own – shamelessly using/stealing Microsoft’s Terraform provider (referenced above) for project creation.

Setting up GO Environment

Terraform providers and plugins are binaries that Terraform communicates with at runtime via RPC. It’s theoretically possible to write a provider in any language, but to be honest, I haven’t come across any providers written in languages other than Go. Terraform provides helper libraries in Go to aid in writing and testing providers.

I am developing on Windows 10 and didn’t want to install Go on my local machine. Containers come to the rescue, of course. I am using the “Remote Development” extension in VS Code. This extension allows me to keep the source code on my local machine and compile and build it in a container – like magic!


Figure: Remote Development extension in VSCode – running container to build local repository.

Creating the provider

To create a Terraform provider we need to write the logic for managing the creation, reading, updating and deletion (CRUD) of a resource (i.e. an Azure DevOps project, team and members in this scenario), and Terraform takes care of the rest: state, locking, the templating language and managing the lifecycle of the resources. In this repository I have a minimal implementation that supports creating Azure DevOps projects, teams and their members.

First of all, we define our provider and resources in the main.go file.

package main

import (
	"github.com/hashicorp/terraform/plugin"
	"github.com/hashicorp/terraform/terraform"
)

func main() {
	plugin.Serve(&plugin.ServeOpts{
		ProviderFunc: func() terraform.ResourceProvider {
			return Provider()
		},
	})
}


Next, we define the provider schema (the attributes it supports as inputs and outputs, its resources, etc.).

package main

import (
	"github.com/hashicorp/terraform/helper/schema"
)

// Provider - The top level Azure DevOps Provider definition.
func Provider() *schema.Provider {
	p := &schema.Provider{
		ResourcesMap: map[string]*schema.Resource{
			"azuredevops_pipeline":        resourcePipeline(),
			"azuredevops_project":         resourceProject(),
			"azuredevops_team":            resourceTeam(),
			"azuredevops_serviceendpoint": resourceServiceEndpoint(),
		},
		Schema: map[string]*schema.Schema{
			"org_service_url": {
				Type:        schema.TypeString,
				Required:    true,
				DefaultFunc: schema.EnvDefaultFunc("AZDO_ORG_SERVICE_URL", nil),
				Description: "The url of the Azure DevOps instance which should be used.",
			},
			"personal_access_token": {
				Type:        schema.TypeString,
				Required:    true,
				DefaultFunc: schema.EnvDefaultFunc("AZDO_PERSONAL_ACCESS_TOKEN", nil),
				Description: "The personal access token which should be used.",
			},
		},
	}

	p.ConfigureFunc = providerConfigure(p)
	return p
}

func providerConfigure(p *schema.Provider) schema.ConfigureFunc {
	return func(d *schema.ResourceData) (interface{}, error) {
		client, err := getAzdoClient(d.Get("personal_access_token").(string), d.Get("org_service_url").(string))
		return client, err
	}
}


We use an Azure DevOps personal access token to communicate with the Azure DevOps REST API. The Go client for Azure DevOps from Microsoft – which is used as a dependency – immensely simplified the implementation and also helped me learn the flow.

Now we define the “team” resource as follows:

package main

import (
	"fmt"
	"log"
	"os"

	"github.com/google/uuid"
	"github.com/hashicorp/terraform/helper/schema"
	"github.com/microsoft/azure-devops-go-api/azuredevops/core"
	"github.com/microsoft/azure-devops-go-api/azuredevops/graph"
	"github.com/microsoft/terraform-provider-azuredevops/utils/converter"
	"github.com/microsoft/terraform-provider-azuredevops/utils/tfhelper"
)

func resourceTeam() *schema.Resource {
	return &schema.Resource{
		Create: resourceTeamCreate,
		Read:   resourceTeamRead,
		Update: resourceTeamUpdate,
		Delete: resourceTeamDelete,

		// https://godoc.org/github.com/hashicorp/terraform/helper/schema#Schema
		Schema: map[string]*schema.Schema{
			"project_id": &schema.Schema{
				Type:             schema.TypeString,
				Required:         true,
				DiffSuppressFunc: tfhelper.DiffFuncSupressCaseSensitivity,
			},
			"name": &schema.Schema{
				Type:             schema.TypeString,
				ForceNew:         true,
				Required:         true,
				DiffSuppressFunc: tfhelper.DiffFuncSupressCaseSensitivity,
			},
			"description": &schema.Schema{
				Type:     schema.TypeString,
				Optional: true,
				Default:  "",
			},
			"members": {
				Type:     schema.TypeSet,
				Optional: true,
				Computed: true,
				Elem: &schema.Resource{
					Schema: map[string]*schema.Schema{
						"name": {
							Type:     schema.TypeString,
							Required: true,
						},
						"admin": {
							Type:      schema.TypeBool,
							Required:  true,
							Sensitive: true,
						},
					},
				},
			},
		},
	}
}


That’s all for the declaration; now we implement the CRUD methods in the resource provider. The full source code is on GitHub.

We can compile the provider using the following command:

> GOOS=windows GOARCH=amd64 go build -o terraform-provider-azuredevops.exe

As I am using the Debian Docker image for Go, I need to specify my target OS (GOOS=windows) and CPU architecture (GOARCH=amd64) when I build the provider. This produces the Terraform provider executable for Azure DevOps.

Although it’s an executable, it’s not meant to be launched directly from the command prompt. Instead, I copy it to the “%APPDATA%\terraform.d\plugins\windows_amd64” folder on my machine.

Terraform Script for Azure DevOps

Now we can write the Terraform file (.tf) that describes the Azure DevOps project, team and members.

Figure: the Terraform (.tf) file describing the Azure DevOps project, team and members

With this Terraform file, we can now run the following command to initialize our Terraform working directory.

> terraform init

The terraform init command is used to initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control.

Terraform plan

The terraform plan command is used to create an execution plan. Terraform performs a refresh, and then determines what actions are necessary to achieve the desired state specified in the configuration files. This command is a convenient way to check whether the execution plan for a set of changes matches your expectations without making any changes to real resources or to the state. For example, terraform plan might be run before committing a change to version control, to create confidence that it will behave as expected.


Figure: terraform plan output – shows exactly what is going to happen if we apply these changes to Azure DevOps

Terraform apply

The terraform apply command is used to apply the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a terraform plan execution plan. We will launch it with the -auto-approve flag to skip the interactive approval prompt.

> terraform apply -auto-approve

Now we can go to our Azure DevOps organization and, sure enough, there’s a new project created with the configuration we scripted in the Terraform file.

Taking it further

Now we can check the Terraform file (main.tf above) into an Azure DevOps repository and put a branch policy on it. That forces any change (such as creating new projects or adding and removing team members) to go through a pull request and be reviewed by peers (four-eyes principle). Once a pull request is approved, a simple Azure Pipeline can be triggered to run terraform apply. With that my workflow is automated, and I also have a nice history in Git – which records the purpose of every change made in the past.
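
For illustration, a minimal pipeline-as-code sketch for that apply step could look like the following. It assumes the .tf files live in an infrastructure folder of the repository, that the custom provider binary is already available on the agent, and that the org URL and PAT are supplied as (secret) pipeline variables matching the environment variable names the provider expects.

trigger:
- master
pool:
  vmImage: 'ubuntu-latest'
steps:
- script: |
    terraform init
    terraform apply -auto-approve
  workingDirectory: infrastructure   # assumed folder containing main.tf
  displayName: 'Terraform apply'
  env:
    AZDO_ORG_SERVICE_URL: $(AZDO_ORG_SERVICE_URL)
    AZDO_PERSONAL_ACCESS_TOKEN: $(AZDO_PERSONAL_ACCESS_TOKEN)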

Thanks for reading!

Linkerd in Azure Kubernetes Service cluster

In this article I document my journey of setting up the Linkerd service mesh on Azure Kubernetes Service.

Background

I have a tiny Kubernetes cluster. I run some workloads there; some are useful, others are just try-outs and fun stuff. I have a few services that need to talk to each other. I do not have a lot of traffic to be honest, but I sometimes curiously run Apache ab to simulate load and see how my services perform under stress. Until very recently I was using a messaging (basically pub-sub) pattern to create reactive service-to-service communication. That works great, but often comes with latency. I can only imagine that if I were to run this service-to-service communication in a mission-critical, high-traffic, performance-driven scenario (an online game, for instance), this model wouldn’t fly well. There comes the need for an in-cluster service-to-service communication pattern.

What’s the big deal? We can have REST calls between services, or even implement gRPC for that matter. The issue is that things behave differently at scale. When many services talk to many others, nodes fail in between, the network addresses of pods change, new pods show up and some go down – figuring out where a service sits becomes quite a challenging task.

Then Kubernetes comes to the rescue: Kubernetes provides “Services”, which give us service discovery out of the box. Which is awesome. Not all issues disappear, though. Services in a cluster need fault tolerance, traceability and, most importantly, observability. Circuit breakers, retry logic, etc. – implementing them for each service is again a challenge. This is exactly what a service mesh addresses.
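
As a reminder of what that built-in discovery looks like, a Kubernetes Service is just a stable name in front of a set of pods selected by labels – callers use the DNS name and Kubernetes routes to whatever healthy pods currently back it. A generic sketch (names illustrative):

apiVersion: v1
kind: Service
metadata:
  name: emoji-svc
spec:
  selector:
    app: emoji-svc   # any pod carrying this label backs the service
  ports:
  - port: 8080
    targetPort: 8080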

Service mesh

From thoughtworks radar:

Service mesh is an approach to operating a secure, fast and reliable microservices ecosystem. It has been an important steppingstone in making it easier to adopt microservices at scale. It offers discovery, security, tracing, monitoring and failure handling. It provides these cross-functional capabilities without the need for a shared asset such as an API gateway or baking libraries into each service. A typical implementation involves lightweight reverse-proxy processes, aka sidecars, deployed alongside each service process in a separate container. Sidecars intercept the inbound and outbound traffic of each service and provide cross-functional capabilities mentioned above.

Some of us might remember Aspect-Oriented Programming (AOP) – where we used to separate cross-cutting concerns from our core business concerns. A service mesh is no different. It isolates (in a separate container) these networking and fault-tolerance concerns from the core capabilities (also running in containers).

Linkerd

There are quite a few service mesh solutions out there – all suitable to run in Kubernetes. I have used Envoy and Istio before. They work great in Kubernetes as well as in VM-hosted clusters. However, I must admit, I have developed a preference for Linkerd since I discovered it. Let’s briefly look at how Linkerd works. Imagine the following two services, Service A and Service B, where Service A talks to Service B.

Figure: Service A calling Service B

When Linkerd is installed, it works like an interceptor for all the communication between services. Linkerd uses the sidecar pattern and proxies the communication by rewriting the pod’s iptables rules.

Figure: Linkerd architecture

Linkerd implants two additional containers in our pods. The init container configures the iptables rules so that incoming and outgoing TCP traffic flows through the Linkerd proxy container. The proxy container is the data plane that does the actual interception and provides all the other fault-tolerance goodies.
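
Conceptually, an injected Deployment’s pod template ends up with roughly the following shape. This is a simplified sketch, not the verbatim output of linkerd inject, but the container names follow Linkerd’s conventions:

spec:
  template:
    spec:
      initContainers:
      - name: linkerd-init      # rewrites the pod's iptables rules at startup
        # image and arguments omitted
      containers:
      - name: emoji-svc         # the original application container, unchanged
        # application spec omitted
      - name: linkerd-proxy     # the data-plane proxy (sidecar)
        # intercepts inbound and outbound TCP traffic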

The primary reasons behind my preference for Linkerd are performance and simplicity. Ivan Sim has benchmarked Linkerd and Istio:

Both the Linkerd2-meshed setup and Istio-meshed setup experienced higher latency and lower throughput, when compared with the baseline setup. The latency incurred in the Istio-meshed setup was higher than that observed in the Linkerd2-meshed setup. The Linkerd2-meshed setup was able to handle higher HTTP and GRPC ping throughput than the Istio-meshed setup.

Cluster provision

Spinning up AKS is as easy as pie these days. We can use an Azure Resource Manager template or Terraform for that. I have used Terraform for it.

resource "azurerm_resource_group" "cloudoven" {
  name     = "cloudoven"
  location = "West Europe"
}

resource "azurerm_kubernetes_cluster" "cloudovenaks" {
  name                = "cloudovenaks"
  location            = "${azurerm_resource_group.cloudoven.location}"
  resource_group_name = "${azurerm_resource_group.cloudoven.name}"
  dns_prefix          = "cloudovenaks"

  agent_pool_profile {
    name            = "default"
    count           = 1
    vm_size         = "Standard_D1_v2"
    os_type         = "Linux"
    os_disk_size_gb = 30
  }

  agent_pool_profile {
    name            = "pool2"
    count           = 1
    vm_size         = "Standard_D2_v2"
    os_type         = "Linux"
    os_disk_size_gb = 30
  }

  service_principal {
    client_id     = "98e758f8r-f734-034a-ac98-0404c500e010"
    client_secret = "Jk==3djk(efd31kla934-=="
  }

  tags = {
    Environment = "Production"
  }
}

output "client_certificate" {
  value = "${azurerm_kubernetes_cluster.cloudovenaks.kube_config.0.client_certificate}"
}

output "kube_config" {
  value = "${azurerm_kubernetes_cluster.cloudovenaks.kube_config_raw}"
}


Service deployment

This is going to take a few minutes, and then we have a cluster. We will use the canonical emojivoto app (“buoyantio/emojivoto-emoji-svc:v8”) to test our Linkerd installation. Here’s the Kubernetes manifest file for it.

apiVersion: v1
kind: Namespace
metadata:
  name: emojivoto
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: emoji
  namespace: emojivoto
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: voting
  namespace: emojivoto
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: web
  namespace: emojivoto
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  name: emoji
  namespace: emojivoto
spec:
  replicas: 1
  selector:
    matchLabels:
      app: emoji-svc
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: emoji-svc
    spec:
      serviceAccountName: emoji
      containers:
      - env:
        - name: GRPC_PORT
          value: "8080"
        image: buoyantio/emojivoto-emoji-svc:v8
        name: emoji-svc
        ports:
        - containerPort: 8080
          name: grpc
        resources:
          requests:
            cpu: 100m
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: emoji-svc
  namespace: emojivoto
spec:
  selector:
    app: emoji-svc
  clusterIP: None
  ports:
  - name: grpc
    port: 8080
    targetPort: 8080
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  name: voting
  namespace: emojivoto
spec:
  replicas: 1
  selector:
    matchLabels:
      app: voting-svc
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: voting-svc
    spec:
      serviceAccountName: voting
      containers:
      - env:
        - name: GRPC_PORT
          value: "8080"
        image: buoyantio/emojivoto-voting-svc:v8
        name: voting-svc
        ports:
        - containerPort: 8080
          name: grpc
        resources:
          requests:
            cpu: 100m
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: voting-svc
  namespace: emojivoto
spec:
  selector:
    app: voting-svc
  clusterIP: None
  ports:
  - name: grpc
    port: 8080
    targetPort: 8080
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  name: web
  namespace: emojivoto
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-svc
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web-svc
    spec:
      serviceAccountName: web
      containers:
      - env:
        - name: WEB_PORT
          value: "80"
        - name: EMOJISVC_HOST
          value: emoji-svc.emojivoto:8080
        - name: VOTINGSVC_HOST
          value: voting-svc.emojivoto:8080
        - name: INDEX_BUNDLE
          value: dist/index_bundle.js
        image: buoyantio/emojivoto-web:v8
        name: web-svc
        ports:
        - containerPort: 80
          name: http
        resources:
          requests:
            cpu: 100m
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
  namespace: emojivoto
spec:
  type: LoadBalancer
  selector:
    app: web-svc
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  name: vote-bot
  namespace: emojivoto
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vote-bot
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: vote-bot
    spec:
      containers:
      - command:
        - emojivoto-vote-bot
        env:
        - name: WEB_HOST
          value: web-svc.emojivoto:80
        image: buoyantio/emojivoto-web:v8
        name: vote-bot
        resources:
          requests:
            cpu: 10m
status: {}


With this IaC in place, we can run terraform apply to provision our AKS cluster in Azure.

Azure Pipeline

Let’s create a pipeline for the service deployment. The easiest way to do that is to create a service connection to our AKS cluster. We go to the project settings of the Azure DevOps project, pick Service connections and create a new service connection of type “Kubernetes connection”.

Figure: creating the Kubernetes service connection in Azure DevOps

Installing Linkerd

We will create a pipeline that installs Linkerd into the AKS cluster. Azure Pipelines now offers “pipeline-as-code” – which is just a YAML file that describes the steps that need to be performed when the pipeline is triggered. We will use the following pipeline-as-code:

pool:
  name: Hosted Ubuntu 1604
steps:
- task: KubectlInstaller@0
  displayName: 'Install Kubectl latest'
- task: Kubernetes@1
  displayName: 'kubectl get'
  inputs:
    kubernetesServiceEndpoint: CloudOvenKubernetes
    command: get
    arguments: nodes
- script: |
    curl -sL https://run.linkerd.io/install | sh
    export PATH=$PATH:$HOME/.linkerd2/bin
    linkerd version
    linkerd check --pre
    linkerd install | kubectl apply -f -
    linkerd check
  displayName: 'Linkerd - Installation'

We can at this point trigger the pipeline to install Linkerd into the AKS cluster.


Deployment of PODs and services

Let’s create another pipeline-as-code that deploys all the services and deployment resources to AKS, using the Kubernetes manifest file shown above:

pool:
  name: Hosted Ubuntu 1604
steps:
- task: KubectlInstaller@0
  displayName: 'Install Kubectl latest'
- task: Kubernetes@1
  displayName: 'kubectl apply'
  inputs:
    kubernetesServiceEndpoint: CloudOvenKubernetes
    command: apply
    useConfigurationFile: true
    configuration: src/services/emojivoto/all.yml

In Azure Portal we can already see our services running:

Figure: the emojivoto services running in the AKS cluster (Azure portal)

Also in Kubernetes Dashboard:

Figure: the services in the Kubernetes dashboard

We have our services running – but they are not affected by Linkerd yet. We will add another step to the pipeline to tell Linkerd to do its magic.

pool:
  name: Hosted Ubuntu 1604
steps:
- task: KubectlInstaller@0
  displayName: 'Install Kubectl latest'
- task: Kubernetes@1
  displayName: 'kubectl apply'
  inputs:
    kubernetesServiceEndpoint: CloudOvenKubernetes
    command: apply
    useConfigurationFile: true
    configuration: src/services/emojivoto/all.yml
- script: 'cat src/services/emojivoto/all.yml | linkerd inject - | kubectl apply -f -'
  displayName: 'Inject Linkerd'

Next, we trigger the pipeline and put some traffic into the services we have just deployed. The emojivoto app simulates some service-to-service invocation scenarios, and now it’s time to open the Linkerd dashboard to inspect the distributed traces and many other useful metrics.

Figure: the Linkerd dashboard

We can also see a kind of application map – a graphical way to understand which service is calling whom, what the request latencies are, and so on.

Figure: the Linkerd service graph

Even more fascinating, Linkerd provides drill-downs into the communication in a Grafana dashboard.


Conclusion

I have enjoyed setting this up and seeing the outcome, and wanted to share my experience. If you are looking into service meshes and are reading this post, I strongly encourage you to give Linkerd a go – it’s awesome!

Thanks for reading.

Continuously deploy Blazor SPA to Azure Storage static web site

Lately I am learning ASP.NET Blazor – the relatively new UI framework from Microsoft. Blazor is just awesome – the ability to write C# code on both the server and the client side is extremely productive for .NET developers. From the Blazor documentation:

Blazor lets you build interactive web UIs using C# instead of JavaScript. Blazor apps are composed of reusable web UI components implemented using C#, HTML, and CSS. Both client and server code is written in C#, allowing you to share code and libraries.

I wanted to write a simple SPA (Single Page Application) and run it serverless. Azure Storage has offered static website hosting for quite a while now, which seems like a very nice option for running a Blazor SPA that executes in the user’s browser (within the same sandbox as JavaScript). It’s also a cheap way to run a single-page application in the cloud.

I am using GitHub as my source repository for free (in a private repository). Today I wanted to create a pipeline that continuously deploys my Blazor app to the storage account. Azure Pipelines has pretty nice integration with GitHub and it has a free tier as well: even if the GitHub repository or pipeline is private, Azure Pipelines still provides one free parallel job that can run up to 60 minutes each time, until we’ve used 1800 minutes per month. That’s pretty darn good for my use case.

I also wanted to build the project many times on my local machine (while developing) in the same way it gets built in the pipeline. Being a Docker fan myself, that’s a no-brainer. Let’s get started.

Pre-requisite

I performed a few steps before I went after the pipeline – they are beyond the scope of this post.

  • I have created an Azure Subscription
  • Provisioned resource groups and storage account
  • I have created a Service Principal and granted Contributor role to the storage account

Publishing in Docker

I have created a Dockerfile that builds the app, runs the unit tests and, if all goes well, publishes the app to a folder. All of these are standard dotnet commands.
Once we have the application published in a folder, I take the contents of that folder into an Azure CLI base image (where the CLI is pre-installed) and throw away the rest of the intermediate containers.

Here’s our Dockerfile:

FROM mcr.microsoft.com/dotnet/core/aspnet:3.0-buster-slim AS base
WORKDIR /app

# Build the app
FROM mcr.microsoft.com/dotnet/core/sdk:3.0-buster AS build
WORKDIR /src
COPY . .
WORKDIR "/src/BlazorApp"
RUN dotnet restore "BlazorApp.csproj"
RUN dotnet build "BlazorApp.csproj" -c Release -o /app

# Run the unit tests
RUN dotnet restore "BlazorAppTest.csproj"
RUN dotnet build "BlazorAppTest.csproj"
RUN dotnet test "BlazorAppTest.csproj"

# Publish the app
FROM build AS publish
RUN dotnet publish "BlazorApp.csproj" -c Release -o /app

FROM microsoft/azure-cli AS final
ARG AppID
ARG AppSecret
ARG TenantID
ARG StorageAccountName
WORKDIR /app
COPY --from=publish /app .
WORKDIR "/app/BlazorApp/dist"
RUN az login --service-principal --username $AppID --password $AppSecret --tenant $TenantID
RUN az storage blob delete-batch --account-name $StorageAccountName --source "\$web"
RUN az storage blob upload-batch --account-name $StorageAccountName -s . -d "\$web"

The Dockerfile expects a few arguments (basically the service principal ID, the password of the service principal and the Azure AD tenant ID – these are required for the Azure CLI to sign in to my Azure subscription). Here’s how we can build this image:

docker build \
  -f Blazor.Dockerfile \
  --build-arg StorageAccountName=<Storage-Account-Name> \
  --build-arg AppID=<APP GUID> \
  --build-arg AppSecret=<PASSWORD> \
  --build-arg TenantID=<Azure AD TENANT ID> \
  -t blazor-app .


Azure Pipeline as code

We now have the container; time to run it every time a commit is made to the GitHub repository. Azure Pipelines has a YAML format to define pipeline-as-code – which is another neat feature of Azure Pipelines.

Let’s see what the pipeline-as-code looks like:

trigger:
- master
pool:
  name: Hosted Ubuntu 1604
steps:
- task: Docker@0
  displayName: 'Build and release Blazor to Storage Account'
  inputs:
    dockerFile: src/Blazor.Dockerfile
    buildArguments: |
      StorageAccountName=$(StorageAccountName)
      AppID=$(AppID)
      AppSecret=$(AppSecret)
      TenantID=$(TenantID)


I have committed this to the root folder of the same repository.

Creating the pipeline

We need to log in to Azure DevOps and create a project (if there isn’t one already). From the build option we can create a new build definition.


The steps to create a build definition are very straightforward; it allows us to point directly to the GitHub repository that we want to build.
Almost there. We need to supply the service principal ID, password, tenant ID and storage account name to this pipeline – because both our Dockerfile and the pipeline-as-code expect them as dependencies. However, we can’t just put their values in and commit them to GitHub; they should be kept secret.

Azure Pipeline Secret variables

Azure Pipelines allows us to define secret variables for a pipeline. We need to open the build definition in “edit” mode and then use the top-right ellipses menu to get to the pipeline variables.


Now we can define the values of these secrets and keep them hidden (there’s a lock icon for that).


That’s all we need. We are ready to deploy the code to Azure Storage via this pipeline. If we now make a change in the repository, it will trigger the pipeline and, sure enough, it will build, test, publish and deploy to Azure Storage as a static SPA.

Thanks for reading!

Continuously deliver changes to Azure API management service with Git Configuration Repository

What is API management

Publishing data, insights and business capabilities via APIs in a unified way can be challenging at times. Azure API Management (APIM) makes it simpler than ever.

Businesses everywhere are looking to extend their operations as a digital platform, creating new channels, finding new customers and driving deeper engagement with existing ones. API Management provides the core competencies to ensure a successful API program through developer engagement, business insights, analytics, security, and protection. You can use Azure API Management to take any backend and launch a full-fledged API program based on it. [Source]

The challenge – Continuous Deployment

These days it’s very common to have many distributed services (microservices, say) publish APIs into a shared Azure API Management portal. For instance, Order and Invoice APIs are published through one e-commerce API portal, although they are backed by isolated Order and Invoice microservices. Autonomous teams build these APIs, often working in isolation, but their API specifications (mostly OpenAPI/Swagger documents) must be published through a shared API Management service. Different teams with different release cadences can make continuous deployment of the API portal challenging and error-prone.
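
For context, each team’s deliverable is typically a small OpenAPI (Swagger) document along the lines of the sketch below – the title and path are of course illustrative, not from any real team:

openapi: 3.0.0
info:
  title: Order API
  version: 1.0.0
paths:
  /orders/{id}:
    get:
      summary: Returns a single order
      parameters:
      - name: id
        in: path
        required: true
        schema:
          type: string
      responses:
        '200':
          description: The order details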

Azure API Management ships a bunch of PowerShell cmdlets (i.e. Import-AzureRmApiManagementApi and Publish-AzureRmApiManagementTenantGitConfiguration) that allow deploying API documentation directly to APIM. That works great for a single API development team. It gets a bit trickier when multiple teams push changes to one APIM instance, as in the example above: every team needs deployment credentials in its own release pipeline – which may be undesirable for a shared APIM instance – and centrally governing these changes becomes difficult.

APIM Configuration Git Repository

APIM has a pretty neat feature here. Each APIM instance has a configuration database exposed as a Git repository, containing the metadata and configuration information for the instance. We can clone the configuration repository and push changes back – using our very familiar Git commands and toolsets – and APIM lets us publish those pushed changes. Sweet!

This allows us to download different versions of our APIM configuration state. Managing bulk APIM configuration (API specifications, products, groups, policies, branding styles, etc.) in one central repository with very familiar Git tools is super convenient.

The following diagram shows an overview of the different ways to configure your API Management service instance.

Figure: the different ways to configure an API Management service instance

[Source]

This sounds great! We will leverage this capability and make it even nicer, so that multiple teams can develop their APIs without depending on each other’s release schedules, while a central release pipeline publishes the changes from multiple API services.

Solution design

The idea is pretty straightforward. Each team develops its own API specification, and when they want to release, they create a PR (pull request) to a shared repository, which contains the clone of the APIM configuration. Once the PR is peer reviewed and merged, the release pipeline kicks in and deploys the changes to Azure APIM.

The workflow looks like following:

Figure: development and deployment workflow

Building the solution

We will provision an APIM instance on Azure. We can do that with an ARM template (we will not go into the details of that; you can use this GitHub template).

Once we have APIM provisioned, we can see that the Git repository is not yet synchronized with the configuration database (notice the Out of sync status in the following image).

Figure: APIM configuration repository status – Out of sync

We will sync it and clone a copy of the configuration database to our local machine using the following PowerShell script. (You need to run Login-AzureRmAccount in the PowerShell console if you are not already logged in to Azure.)

$context = New-AzureRmApiManagementContext `
        -ResourceGroupName $ResourceGroup `
        -ServiceName $ServiceName
    Write-Output "Initializing context...Completed"

    Write-Output "Syncing Git Repo with current API management state..."
    Save-AzureRmApiManagementTenantGitConfiguration `
        -Context $context `
        -Branch 'master' `
        -PassThru -Force

This brings the Git repository in sync.


To clone the repository to local machine, we need to generate Git Credentials first. Let’s do that now:

Function ExecuteGitCommand {
    param
    (
        [System.Object[]]$gitCommandArguments
    )

    $gitExePath = "C:\Program Files\git\bin\git.exe"
    & $gitExePath $gitCommandArguments
}

 

$expiry = (Get-Date) + '1:00:00'
    $parameters = @{
        "keyType" = "primary"
        "expiry"  = ('{0:yyyy-MM-ddTHH:mm:ss.000Z}' -f $expiry)
    }

    $resourceId = '/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.ApiManagement/service/{2}/users/git' -f $SubscriptionId, $ResourceGroup, $ServiceName

    if ((Test-Path -Path $TempDirectory )) {
        Remove-Item $TempDirectory -Force -Recurse -ErrorAction "Stop"
    }

    $gitRemoteSrcPath = Join-Path -Path $TempDirectory -ChildPath 'remote-api-src'

    Write-Output "Retrieving Git Credentials..."
    $gitUsername = 'apim'
    $gitPassword = (Invoke-AzureRmResourceAction `
            -Action 'token' `
            -ResourceId $resourceId `
            -Parameters $parameters `
            -ApiVersion '2016-10-10' `
            -Force).Value
    $escapedGitPassword = [System.Uri]::EscapeDataString($gitPassword)
    Write-Output "Retrieving Git Credentials...Completed"

    $gitRepositoryUrl = 'https://{0}:{1}@{2}.scm.azure-api.net/' -f $gitUsername, $escapedGitPassword, $ServiceName
    ExecuteGitCommand -gitCommandArguments @("clone", "$gitRepositoryUrl", "$gitRemoteSrcPath")

Now we have a copy of the repository on our local machine. This is just a mirror of our APIM configuration database. We will create a repository in our source control system (I am using VSTS); this will be our shared APIM source repository. Every team issues pull requests with their API specifications into this repository, which can be approved by peers and eventually merged to the master branch.

Building the release pipeline

Time to deploy changes from our shared repository to the APIM instance. We need to perform the following steps:

  1. Sync the configuration database to APIM Git Repository.
  2. Clone the latest changes to our Build agent.
  3. Copy all updated API specifications, approved and merged to our VSTS repository’s master branch to the cloned repository.
  4. Commit all changes to the cloned repository.
  5. Push changes from clone repository to origin.
  6. Publish changes from Git Repository to APIM instance.

I have compiled a single PowerShell script that does all these steps, in that order. The idea is to use this PowerShell script in our release pipeline to deploy releases to APIM. The complete script is given below:

# This script should be used into the build pipeline to update API changes
# to Azure API management Service.
# The steps that are scripted below:
#
# 1. Updates the Internal Git Repo with the published API documents in Azure
# 2. Retrieves the Git Credentails of internal Repo
# 3. Clones the repo in to a temporary folder
# 4. Merges changes from VSTS repo to the temporary cloned repository
# 5. Commit dirty changes
# 6. Push changes to Azure Internal Repo (master branch)
# 7. Publishes the API updates from internal Repo to Azure API management service.
#
# Example Usage:
#
# Parameters:
# – SubscriptionId: The subscription ID where the API management Service is provisioned
# – ResourceGroup: Resource group name
# – ServiceName: API management Service name
# – SourceDirectory: Directory where the API documents, policies etc. are located
# – TempDirectory: A temporary directory where the data will be downloaded
# – UserName: $ENV:RELEASE_DEPLOYMENT_REQUESTEDFOREMAIL – The user name that will be used to commit to Git. You can use Relase triggerd here
# – UserEmailAddress: $ENV:RELEASE_DEPLOYMENT_REQUESTEDFOR The email address of the user
# – CommitMessage: $ENV:RELEASE_RELEASEWEBURL The commit message
#
# Get-AzureRMApiManagementGitRepo `
# -SubscriptionId "– Subscription ID — " `
# -ResourceGroup " — Resource group name — " `
# -ServiceName " — APIM name — " `
# -SourceDirectory "..\src\api-management" `
# -TempDirectory 'C:\Temp\apim' `
# -UserName "Moim Hossain" `
# -CommitMessage "Commited from CI"
# -UserEmailAddress "moim.hossain"
Function ExecuteGitCommand {
param
(
[System.Object[]]$gitCommandArguments
)
$gitExePath = "C:\Program Files\git\bin\git.exe"
& $gitExePath $gitCommandArguments
}
Function Copy-DirectoryContents {
param
(
[System.String]$SrcPath,
[System.String]$destPath
)
ROBOCOPY $SrcPath $destPath /MIR
}
Function Get-AzureRMApiManagementGitRepo {
    param
    (
        [System.String] $SubscriptionId,
        [System.String] $ResourceGroup,
        [System.String] $ServiceName,
        [System.String] $UserName,
        [System.String] $UserEmailAddress,
        [System.String] $CommitMessage,
        [System.String] $SourceDirectory,
        [System.String] $TempDirectory
    )
    Write-Output "Initializing context..."
    $context = New-AzureRmApiManagementContext `
        -ResourceGroupName $ResourceGroup `
        -ServiceName $ServiceName
    Write-Output "Initializing context...Completed"

    # Sync the APIM-hosted Git repository with the current state of the service
    Write-Output "Syncing Git Repo with current API management state..."
    Save-AzureRmApiManagementTenantGitConfiguration `
        -Context $context `
        -Branch 'master' `
        -PassThru -Force
    Write-Output "Syncing Git Repo with current API management state...Completed"

    # Request a Git access token for the built-in 'git' user, valid for one hour
    $expiry = (Get-Date) + '1:00:00'
    $parameters = @{
        "keyType" = "primary"
        "expiry"  = ('{0:yyyy-MM-ddTHH:mm:ss.000Z}' -f $expiry)
    }
    $resourceId = '/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.ApiManagement/service/{2}/users/git' -f $SubscriptionId, $ResourceGroup, $ServiceName

    if ((Test-Path -Path $TempDirectory)) {
        Remove-Item $TempDirectory -Force -Recurse -ErrorAction "Stop"
    }
    $gitRemoteSrcPath = Join-Path -Path $TempDirectory -ChildPath 'remote-api-src'

    Write-Output "Retrieving Git Credentials..."
    $gitUsername = 'apim'
    $gitPassword = (Invoke-AzureRmResourceAction `
        -Action 'token' `
        -ResourceId $resourceId `
        -Parameters $parameters `
        -ApiVersion '2016-10-10' `
        -Force).Value
    $escapedGitPassword = [System.Uri]::EscapeDataString($gitPassword)
    Write-Output "Retrieving Git Credentials...Completed"

    $gitRepositoryUrl = 'https://{0}:{1}@{2}.scm.azure-api.net/' -f $gitUsername, $escapedGitPassword, $ServiceName
    Write-Host "Performing Git clone... $gitRemoteSrcPath"
    ExecuteGitCommand -gitCommandArguments @("clone", "$gitRepositoryUrl", "$gitRemoteSrcPath")
    Write-Host "Performing Git clone... Completed"

    # Copy changes from the source repository (Azure DevOps) into the locally cloned APIM repository
    $apiSources = Join-Path -Path $gitRemoteSrcPath -ChildPath 'api-management'
    Copy-DirectoryContents `
        -SrcPath $SourceDirectory `
        -destPath $apiSources

    # Commit and push the changes, then publish them to the APIM instance
    Set-Location $gitRemoteSrcPath
    ExecuteGitCommand -gitCommandArguments @("config", "user.email", $UserEmailAddress)
    ExecuteGitCommand -gitCommandArguments @("config", "user.name", $UserName)
    ExecuteGitCommand -gitCommandArguments @("add", "--all")
    ExecuteGitCommand -gitCommandArguments @("commit", "-m", $CommitMessage)
    ExecuteGitCommand -gitCommandArguments @("push")
    Publish-AzureRmApiManagementTenantGitConfiguration `
        -Context $context `
        -Branch 'master' `
        -PassThru `
        -Verbose
}
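
For reference, here is a minimal sketch of how such a script could be wired into a pipeline with the Azure PowerShell task. Everything named below is an assumption for illustration only: the service connection ('my-azure-subscription'), the path where the script is committed (deploy/APIM-Deployment.ps1) and the pipeline variables holding the APIM details. The script shells out to git.exe under C:\Program Files, so it expects a Windows agent, and since it uses the older AzureRM cmdlets you would either run it on a task/module version that still ships AzureRM or enable the AzureRM aliases first.

# Sketch only - service connection, paths and variable names are hypothetical
- task: AzurePowerShell@5
  displayName: Publish API definitions to APIM
  inputs:
    azureSubscription: 'my-azure-subscription'
    ScriptType: 'InlineScript'
    Inline: |
      Enable-AzureRmAlias                 # the script above uses AzureRM cmdlet names
      . '$(Build.SourcesDirectory)/deploy/APIM-Deployment.ps1'
      Get-AzureRMApiManagementGitRepo `
        -SubscriptionId '$(SubscriptionId)' `
        -ResourceGroup '$(ResourceGroup)' `
        -ServiceName '$(ApimServiceName)' `
        -SourceDirectory '$(Build.SourcesDirectory)/src/api-management' `
        -TempDirectory '$(Agent.TempDirectory)/apim' `
        -UserName '$(Build.RequestedFor)' `
        -UserEmailAddress '$(Build.RequestedForEmail)' `
        -CommitMessage 'Committed from CI'
    azurePowerShellVersion: 'LatestVersion'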


Final thoughts

The Git repository model for deploying API specifications to a single APIM instance makes it extremely easy to manage. We could have done all of this with PowerShell alone, but in a multi-team scenario that gets messy pretty quickly. Having a central Git repository as the release gateway (and the only way to make any changes to the APIM instance) keeps the complexity to a minimum.

Azure Web App – Removing IP Restrictions

Azure Web App allows us to configure IP restrictions (the same goes for Azure Functions and API apps). This lets us define a priority-ordered allow/deny list of IP addresses as access rules for our app. The allow list can include IPv4 and IPv6 addresses.

[Figure: IP restrictions flow. Source: MSDN]

Developers often run into scenarios where they need to manipulate these restriction rules programmatically. Adding or removing IP restrictions from the Portal is easy and documented here. We can also manage them with ARM templates; the ipSecurityRestrictions property is part of the site configuration (Microsoft.Web/sites/config), like the following:


"ipSecurityRestrictions": [
{
"ipAddress": "131.107.159.0/24",
"action": "Allow",
"tag": "Default",
"priority": 100,
"name": "allowed access"
}
],

However, sometimes it's handy to do this from PowerShell scripts that can be executed as a Build/Release task in a CI/CD pipeline (or in other environments), adding and removing restriction rules on the fly. A quick search turns up quite a few blog posts that show how to add IP restrictions, but not many that show how to remove them.

In this post, I will present a complete PowerShell script that allows us to do the following:

  • Add an IP restriction
  • View the IP restrictions
  • Remove all IP Restrictions

Add-AzureRmWebAppIPRestrictions

function Add-AzureRmWebAppIPRestrictions {
    Param(
        $WebAppName,
        $ResourceGroupName,
        $IPAddress,
        $Mask
    )

    $APIVersion = ((Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Web).ResourceTypes | Where-Object ResourceTypeName -eq sites).ApiVersions[0]
    $WebAppConfig = (Get-AzureRmResource -ResourceType Microsoft.Web/sites/config -ResourceName $WebAppName -ResourceGroupName $ResourceGroupName -ApiVersion $APIVersion)
    $IpSecurityRestrictions = $WebAppConfig.Properties.ipsecurityrestrictions

    if ($ipAddress -in $IpSecurityRestrictions.ipAddress) {
        "$IPAddress is already restricted in $WebAppName."
    }
    else {
        $webIP = [PSCustomObject]@{ipAddress = ''; subnetMask = ''; Priority = 300}
        $webIP.ipAddress = $ipAddress
        $webIP.subnetMask = $Mask
        if($null -eq $IpSecurityRestrictions){
            $IpSecurityRestrictions = @()
        }

        [System.Collections.ArrayList]$list = $IpSecurityRestrictions
        $list.Add($webIP) | Out-Null

        $WebAppConfig.properties.ipSecurityRestrictions = $list
        $WebAppConfig | Set-AzureRmResource  -ApiVersion $APIVersion -Force | Out-Null
        Write-Output "New restricted IP address $IPAddress has been added to WebApp $WebAppName"
    }
}

Get-AzureRmWebAppIPRestrictions

function Get-AzureRmWebAppIPRestrictions {
    param
    (
        [string] $WebAppName,
        [string] $ResourceGroupName
    )
    $APIVersion = ((Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Web).ResourceTypes | Where-Object ResourceTypeName -eq sites).ApiVersions[0]

    $WebAppConfig = (Get-AzureRmResource -ResourceType Microsoft.Web/sites/config -ResourceName  $WebAppName -ResourceGroupName $ResourceGroupName -ApiVersion $APIVersion)
    $IpSecurityRestrictions = $WebAppConfig.Properties.ipsecurityrestrictions
    if ($null -eq $IpSecurityRestrictions) {
        Write-Output "$WebAppName has no IP restrictions."
    }
    else {
        Write-Output "$WebAppName IP Restrictions: "
        $IpSecurityRestrictions
    }
}

Remove-AzureRmWebAppIPRestrictions

function  Remove-AzureRmWebAppIPRestrictions {
    param (
        [string]$WebAppName,
        [string]$ResourceGroupName
    )
    $APIVersion = ((Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Web).ResourceTypes | Where-Object ResourceTypeName -eq sites).ApiVersions[0]

    $r = Get-AzureRmResource -ResourceGroupName $ResourceGroupName -ResourceType Microsoft.Web/sites/config -ResourceName "$WebAppName/web" -ApiVersion $APIVersion
    $p = $r.Properties
    $p.ipSecurityRestrictions = @()
    Set-AzureRmResource -ResourceGroupName  $ResourceGroupName -ResourceType Microsoft.Web/sites/config -ResourceName "$WebAppName/web" -ApiVersion $APIVersion -PropertyObject $p -Force
}
And finally, to test them:
function  Test-Everything {
    if (!(Get-AzureRmContext)) {
        Write-Output "Please login to your Azure account"
        Login-AzureRmAccount
    }

    Get-AzureRmWebAppIPRestrictions -WebAppName "my-app" -ResourceGroupName "my-rg-name"

    Remove-AzureRmWebAppIPRestrictions -WebAppName "my-app" -ResourceGroupName "my-rg-name" 

    Add-AzureRmWebAppIPRestrictions -WebAppName "my-app" -ResourceGroupName "my-rg-name"  -IPAddress "192.51.100.0/24" -Mask ""

    Get-AzureRmWebAppIPRestrictions -WebAppName "my-app" -ResourceGroupName "my-rg-name"
}

Test-Everything

Thanks for reading!

Deploying Azure web job written in .net core

Lately I wrote a .NET Core web job and wanted to publish it via CD (continuous deployment) from Visual Studio Online. Soon I figured out that the Azure WebJobs SDK doesn't support .NET Core yet. The work I expected to take 10 minutes took about an hour.

If you are trying to figure this out as well, this blog post is what you are looking for.

I will describe the steps and provide a PowerShell script that does the deployment via the Kudu API. Kudu is the source control management engine behind Azure App Service, and it exposes a Zip API that allows us to deploy a zipped folder to an Azure app service.

Here are the steps you need to follow. Start by creating a simple .NET Core console application and add a PowerShell file to the project that will do the deployment in your Visual Studio Online release pipeline. The PowerShell script will do the following:

  • Publish the project (using dotnet publish)
  • Make a zip out of the artifacts
  • Deploy the zip into the Azure web app

Publishing the project

We will use the dotnet publish command to publish our project.

$resourceGroupName = "my-resource-group"
$webAppName = "my-web-job"
$projectName = "WebJob"
$outputRoot = "webjobpublish"
$outputFolderPath = "webjobpublish\App_Data\Jobs\Continuous\my-web-job"
$zipName = "publishwebjob.zip"

$projectFolder = Join-Path `
    -Path "$((get-item $PSScriptRoot ).FullName)" `
    -ChildPath $projectName
$outputFolder = Join-Path `
    -Path "$((get-item $PSScriptRoot ).FullName)" `
    -ChildPath $outputFolderPath
$outputFolderTopDir = Join-Path `
    -Path "$((get-item $PSScriptRoot ).FullName)" `
    -ChildPath $outputRoot
$zipPath = Join-Path `
    -Path "$((get-item $PSScriptRoot ).FullName)" `
    -ChildPath $zipName

if (Test-Path $outputFolder)
  { Remove-Item $outputFolder -Recurse -Force; }
if (Test-Path $zipPath) { Remove-Item $zipPath -Force }
$fullProjectPath = "$projectFolder\$projectName.csproj"

dotnet publish "$fullProjectPath" `
    --configuration release --output $outputFolder

Create a compressed artifact folder

We will use the System.IO.Compression.FileSystem assembly to create the zip file.

Add-Type -assembly "System.IO.Compression.Filesystem"
[IO.Compression.ZipFile]::CreateFromDirectory(
        $outputFolderTopDir, $zipPath)

Upload the zip into Azure web app

The next step is to upload the zip file to the Azure web app. For that, we first need to fetch the publishing credentials of the web app and then use the Kudu API to upload the content. Here's the script:

function Get-PublishingProfileCredentials
         ($resourceGroupName, $webAppName) {

    $resourceType = "Microsoft.Web/sites/config"
    $resourceName = "$webAppName/publishingcredentials"

    $publishingCredentials = Invoke-AzureRmResourceAction `
                 -ResourceGroupName $resourceGroupName `
                 -ResourceType $resourceType `
                 -ResourceName $resourceName `
                 -Action list `
                 -ApiVersion 2015-08-01 `
                 -Force
    return $publishingCredentials
}

function Get-KuduApiAuthorisationHeaderValue
         ($resourceGroupName, $webAppName) {

    $publishingCredentials =
      Get-PublishingProfileCredentials $resourceGroupName $webAppName

    return ("Basic {0}" -f `
        [Convert]::ToBase64String( `
        [Text.Encoding]::ASCII.GetBytes(("{0}:{1}"
           -f $publishingCredentials.Properties.PublishingUserName, `
        $publishingCredentials.Properties.PublishingPassword))))
}

$kuduHeader = Get-KuduApiAuthorisationHeaderValue `
    -resourceGroupName $resourceGroupName `
    -webAppName $webAppName

$Headers = @{
    Authorization = $kuduHeader
}

# use kudu deploy from zip file
Invoke-WebRequest `
    -Uri "https://$webAppName.scm.azurewebsites.net/api/zipdeploy" `
    -Headers $Headers `
    -InFile $zipPath `
    -ContentType "multipart/form-data" `
    -Method Post

# Clean up the artifacts now
if (Test-Path $outputFolder)
      { Remove-Item $outputFolder -Recurse -Force; }
if (Test-Path $zipPath) { Remove-Item $zipPath -Force }

PowerShell task in Visual Studio Online

Now we can leverage the Azure PowerShell task in the Visual Studio Online release pipeline and invoke the script to deploy the web job.
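
As a rough illustration, such a task might look like the following in today's YAML pipeline syntax. The service connection name and script location are assumptions, and because the script relies on the older AzureRM cmdlets you would pick a task/Azure PowerShell module version that still understands them (or enable the AzureRM aliases inside the script).

# Sketch only - connection name and script path are hypothetical
- task: AzurePowerShell@5
  displayName: Deploy .NET Core web job via Kudu
  inputs:
    azureSubscription: 'my-azure-subscription'
    ScriptType: 'FilePath'
    ScriptPath: '$(Build.SourcesDirectory)/deploy-webjob.ps1'   # the script pieces shown above, saved as one file
    azurePowerShellVersion: 'LatestVersion'

In a classic release pipeline, the equivalent is to point the Azure PowerShell task's script path at the same file.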

That’s it!

Thanks for reading, and have a nice day!