Continuously deliver changes to Azure API management service with Git Configuration Repository

What is API management

Publishing data, insights and business capabilities via APIs in a unified way can be challenging. Azure API Management (APIM) makes it simpler than ever.

Businesses everywhere are looking to extend their operations as a digital platform, creating new channels, finding new customers and driving deeper engagement with existing ones. API Management provides the core competencies to ensure a successful API program through developer engagement, business insights, analytics, security, and protection. You can use Azure API Management to take any backend and launch a full-fledged API program based on it. [Source]

The challenge – Continuous Deployment

These days it's very common to have many distributed services (microservices, say) publish their APIs through a single Azure API Management portal. For instance, Order and Invoice APIs are published over an e-commerce API portal, although they are backed by isolated Order and Invoice microservices. Autonomous teams build these APIs and often work in isolation, but their API specifications (mostly Open API/Swagger documents) must be published through a shared API Management service. Different teams with different release cadences can make continuous deployment of the API portal challenging and error prone.

Azure API Management ships a bunch of PowerShell cmdlets (e.g. Import-AzureRmApiManagementApi and Publish-AzureRmApiManagementTenantGitConfiguration) that allow deploying API documentation directly to APIM, which works great for a single API development team. It gets a bit trickier when multiple teams are pushing changes to one APIM instance, as in the example above. Every team needs deployment credentials in its own release pipeline, which might be undesirable for a shared APIM instance, and centrally governing these changes becomes difficult.

APIM Configuration Git Repository

Each APIM instance has a pretty neat feature: a configuration database exposed as a Git repository, containing the metadata and configuration information for the instance. We can clone the configuration repository and push changes back using our very familiar Git commands and tool sets, and APIM lets us publish the changes we push – sweet!

This allows us to download different versions of our APIM configuration state. Managing bulk APIM configuration (API specifications, products, groups, policies, branding styles and so on) in one central repository, with very familiar Git tools, is super convenient.

The following diagram shows an overview of the different ways to configure your API Management service instance.

api-management-git-configure

[Source]

This sounds great, and we will leverage this capability to make it even nicer: multiple teams can develop their APIs without depending on each other's release schedules, while a central release pipeline publishes the changes from all of them.

Solution design

The idea is pretty straightforward. Each team develops its own API specification, and when they want to release, they create a PR (Pull Request) to a shared repository that contains the APIM configuration clone. Once the PR is peer reviewed and merged, the release pipeline kicks in and deploys the changes to Azure APIM.

The workflow looks like following:

workflow
Development and deployment workflow

Building the solution

We will provision an APIM instance on Azure. We can do that with an ARM template (we will not go into the details of that; you can use this GitHub template).

Once we have APIM provisioned, we can see that the Git repository is not yet synchronized with the configuration database (notice "Out of sync" in the following image).

Out of sync

We will sync it and clone a copy of the configuration database to our local machine using the following PowerShell script. (You need to run Login-AzureRmAccount in the PowerShell console if you are not already logged in to Azure.)

$context = New-AzureRmApiManagementContext `
    -ResourceGroupName $ResourceGroup `
    -ServiceName $ServiceName
Write-Output "Initializing context...Completed"

Write-Output "Syncing Git Repo with current API management state..."
Save-AzureRmApiManagementTenantGitConfiguration `
    -Context $context `
    -Branch 'master' `
    -PassThru -Force

This brings the Git repository in sync.

Sync

To clone the repository to the local machine, we need to generate Git credentials first. Let's do that now:

Function ExecuteGitCommand {
    param
    (
        [System.Object[]]$gitCommandArguments
    )

    $gitExePath = "C:\Program Files\git\bin\git.exe"
    & $gitExePath $gitCommandArguments
}

 

$expiry = (Get-Date) + '1:00:00'
$parameters = @{
    "keyType" = "primary"
    "expiry"  = ('{0:yyyy-MM-ddTHH:mm:ss.000Z}' -f $expiry)
}

$resourceId = '/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.ApiManagement/service/{2}/users/git' -f $SubscriptionId, $ResourceGroup, $ServiceName

if ((Test-Path -Path $TempDirectory)) {
    Remove-Item $TempDirectory -Force -Recurse -ErrorAction "Stop"
}

$gitRemoteSrcPath = Join-Path -Path $TempDirectory -ChildPath 'remote-api-src'

Write-Output "Retrieving Git Credentials..."
$gitUsername = 'apim'
$gitPassword = (Invoke-AzureRmResourceAction `
        -Action 'token' `
        -ResourceId $resourceId `
        -Parameters $parameters `
        -ApiVersion '2016-10-10' `
        -Force).Value
$escapedGitPassword = [System.Uri]::EscapeDataString($gitPassword)
Write-Output "Retrieving Git Credentials...Completed"

$gitRepositoryUrl = 'https://{0}:{1}@{2}.scm.azure-api.net/' -f $gitUsername, $escapedGitPassword, $ServiceName
ExecuteGitCommand -gitCommandArguments @("clone", "$gitRepositoryUrl", "$gitRemoteSrcPath")

Now we have a copy of the Git repository on our local machine; it is just a mirror of our APIM configuration database. We will create a repository in our source control (I am using VSTS). This will be our shared APIM source repository: every team will issue a Pull Request with their API specification into this repository, which can be approved by peers and eventually merged to the master branch.

Building the release pipeline

Time to deploy changes from our shared repository to the APIM instance. We need to perform the following steps:

  1. Sync the configuration database to APIM Git Repository.
  2. Clone the latest changes to our Build agent.
  3. Copy the updated API specifications that were approved and merged into our VSTS repository's master branch into the cloned repository.
  4. Commit all changes to the cloned repository.
  5. Push the changes from the cloned repository to origin.
  6. Publish changes from Git Repository to APIM instance.

I have compiled a single PowerShell script that performs all of these steps, in that order; the idea is to run it in our release pipeline so that every merge to master gets deployed to APIM. The script essentially chains the sync (Save-AzureRmApiManagementTenantGitConfiguration), the Git clone/commit/push commands shown earlier, and Publish-AzureRmApiManagementTenantGitConfiguration to publish the pushed configuration to the APIM instance.

Final thoughts

The Git repository model for deploying API specifications to a single APIM instance makes it extremely easy to manage. We could have done all of this with PowerShell alone, but in a multiple-team scenario that gets messy pretty quickly. Having a central Git repository as the release gateway (and the only way to make any changes to the APIM instance) reduces the complexity to a minimum.

OpenSSL as a Service

OpenSSL is awesome! It does, however, require a little manual work: remembering all the commands and executing them on a machine that has OpenSSL installed. In this post, I'm going to build an HTTP API over OpenSSL, exposing the most commonly used commands (with the possibility to extend it further as required). This will help folks who want to run OpenSSL inside a private network but orchestrate it from their automation workflows.

Background

Ever wanted to automate the TLS (also known as SSL) configuration process for your web application? You know, the sites that are served via HTTPS and for which Chrome shows a green "Secure" mark in the address bar. Serving a site over HTTP is insecure (even for static content) and major browsers will mark those sites as not secure; Chrome already does that today.

Serving content via HTTPS involves buying a digital certificate (aka an SSL/TLS certificate) from a certificate authority (CA). The process can seem complicated (and sometimes expensive) to many site owners and developers. Let's Encrypt addressed this hardship and made it painless: it's an open certificate authority that provides free TLS certificates in an automated and elegant way.

However, free certificates might not be ideal for enterprise scenarios. An enterprise might have a requirement to buy certificates from a specific CA. In many cases that process is manual, and often complicated and slow. Typically, the workflow starts by generating a Certificate Signing Request (CSR), which requires generating an asymmetric key pair (a public and private key pair). The CSR is then sent to the CA to obtain a digital identity certificate. It doesn't stop there: once the certificate is provided by the CA, it sometimes (especially if you are in the IIS, .NET or Azure world) needs to be converted to a PFX (Personal Information Exchange) file to deploy the certificate to the web server.

PFX (aka PKCS #12) is an archive file format for storing many cryptography objects in a single file. It's used to bundle a private key with its X.509 certificate, or to bundle all the members of a chain of trust. The file may be encrypted and signed, and the internal storage containers (aka SafeBags) may also be encrypted and signed.

Generating a CSR and converting a digital identity certificate to PFX format are often done manually. There are online services that allow you to generate CSRs via an API or a UI. These are very useful and handy, but not the best fit for an enterprise, because the private keys need to be shared with the online provider to generate the CSR. That leads people to use the vastly popular OpenSSL utility on their local workstations to generate CSRs. In this article, that is exactly what I am trying to avoid: I wanted an API over OpenSSL, so that I can invoke it from my other automation workflows running in the cloud.

Next, we will see how we can expose OpenSSL over an HTTP API in a Docker container, so we can run the container in our private enterprise network and orchestrate it in our certificate automation workflows.

The Solution Design

We will write a .NET Core web app that exposes OpenSSL commands via a web API. Each web API request forks an OpenSSL process with the requested command and returns the outcome as the API response.

OpenSSL behind .net core Web API

We are using System.Diagnostics.Process to launch OpenSSL in our code. This assumes the OpenSSL executable is present in our path, which we will ensure soon with Docker.

        private static StringBuilder ExecuteOpenSsl(string command)
        {
            var logs = new StringBuilder();
            var executableName = "openssl";
            var processInfo = new ProcessStartInfo(executableName)
            {
                Arguments = command,
                UseShellExecute = false,
                RedirectStandardError = true,
                RedirectStandardOutput = true,
                CreateNoWindow = true
            };

            var process = Process.Start(processInfo);
            while (!process.StandardOutput.EndOfStream)
            {
                logs.AppendLine(process.StandardOutput.ReadLine());
            }
            logs.AppendLine(process.StandardError.ReadToEnd());
            return logs;
        }

This simply kicks off the OpenSSL executable with a command and captures the output (or errors). We can now use this in our web API controller.

    /// <summary>
    /// The Open SSL API
    /// </summary>
    [Produces("application/json")]
    [Route("api/OpenSsl")]
    public class OpenSslController : Controller
    {
        /// <summary>
        /// Creates a new CSR
        /// </summary>
        /// <param name="payload">Payload info</param>
        /// <returns>The CSR with the private key</returns>
        [HttpPost("CSR")]
        public async Task<IActionResult> Csr([FromBody] CsrRequestPayload payload)
        {
            var response = await CertificateManager.GenerateCSRAsync(payload);
            return new JsonResult(response);
        }

This snippet only shows one example: we receive a CSR generation request, use OpenSSL to generate the CSR, and return the CSR details (in base64-encoded string format) as the API response.
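For completeness, here is a rough sketch of what CertificateManager.GenerateCSRAsync could look like, assuming it lives next to the ExecuteOpenSsl helper shown above. The CsrRequestPayload properties (CommonName, KeyLength) and the response shape are illustrative assumptions on my part; the real definitions are in the GitHub repository linked at the end of this post.

using System;
using System.IO;
using System.Threading.Tasks;

public class CsrRequestPayload
{
    // Illustrative properties; the actual payload shape may differ
    public string CommonName { get; set; }
    public int KeyLength { get; set; } = 2048;
}

public static class CertificateManager
{
    public static async Task<object> GenerateCSRAsync(CsrRequestPayload payload)
    {
        var keyFile = Path.GetTempFileName();
        var csrFile = Path.GetTempFileName();

        // Generate a new RSA key pair and a CSR for the requested common name
        var command = $"req -new -newkey rsa:{payload.KeyLength} -nodes " +
                      $"-keyout \"{keyFile}\" -out \"{csrFile}\" -subj \"/CN={payload.CommonName}\"";

        // ExecuteOpenSsl is the helper shown earlier in this post
        var logs = ExecuteOpenSsl(command);

        return await Task.FromResult(new
        {
            Csr = Convert.ToBase64String(File.ReadAllBytes(csrFile)),
            PrivateKey = Convert.ToBase64String(File.ReadAllBytes(keyFile)),
            Logs = logs.ToString()
        });
    }
}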

Other commands follow the same model, so I am skipping them here.

Building Docker Image

The above snippet assumes that OpenSSL is installed on the machine and that the executable is registered in the system's path. We will turn that assumption into a fact by installing OpenSSL in our Docker image.

FROM microsoft/aspnetcore:2.0 AS base

RUN apt-get update -y
RUN apt-get install -y openssl

Here we are using aspnetcore:2.0 as our base image (a Debian-based Linux image) and installing OpenSSL right after.

Let’s Run it!

I have built the Docker image and published it to Docker Hub; all we need to do is pull and run the container.


The web API listens on port 80 inside the container; in this example we map it to port 8080 on the host. Let's open a browser pointing to:

http://localhost:8080/

Voila! We have our APIs. Here's the Swagger UI for the web API.

swagger

And we can test our CSR generation API via Postman:

Postman

The complete code for this web app, along with the Dockerfile, can be found in this GitHub repository. The Docker image is on Docker Hub.

Thanks for reading.

Resilient Azure Data Lake Analytics (ADLA) Jobs with Azure Functions

Azure Data Lake Analytics is an on-demand analytics job service that allows writing queries to transform data and grab insights efficiently. The analytics service can handle jobs of any scale instantly by setting the dial for how much power you need.


In many organizations these jobs play a crucial role, and the reliability of their execution can be business critical. Lately I encountered a scenario where a particular U-SQL job failed with the following error message:

Usql – Job failed due to internal system error – NM_CANNOT_LAUNCH_JM

A bit of research on Google revealed that it's a system error, which doesn't leave a lot of diagnostic clues to reason about. Retrying the job manually (by clicking the button in the portal) yielded success, which makes it a bit unpredictable and uncertain. However, uncertainty like this is sort of the norm when developing software for the cloud; we have all read or heard about Netflix's Chaos Monkey.

What is resiliency?

Resiliency is the capability to handle partial failures while continuing to execute and not crash. In modern application architectures — whether it be micro services running in containers on-premises or applications running in the cloud — failures are going to occur. For example, applications that communicate over networks (like services talking to a database or an API) are subject to transient failures. These temporary faults cause lesser amounts of downtime due to timeouts, overloaded resources, networking hiccups, and other problems that come and go and are hard to reproduce. These failures are usually self-correcting. (Source)

Today I will present an approach that mitigated this abrupt job failure.

The Solution Design

Basically, I wanted to have a job progress watcher that waits for failed jobs and then resubmits them as retry logic. I also don't want to retry more than once, which has the potential to create a forever-failing loop. I can have my watcher running at a frequency – like every 5 minutes or so.

Azure Functions

Azure Functions keeps impressing me with its lightweight build and consumption-based pricing model. Functions can run with different triggers; among them is the timer (schedule) trigger, which perfectly fits my purpose.

Prerequisites

The function app needs to retrieve failed ADLA jobs and resubmit them as needed. This can be achieved with the Microsoft.Azure.Management.DataLake.Analytics (3.0.0) NuGet package. We will also need the Microsoft.Rest.ClientRuntime.Azure.Authentication (2.0.0) NuGet package for access token retrieval.

Configuration

We need a Service Principal to be able to interact with the ADLA instance on Azure. Managed Service Identity (which I have written about before) could also be used to make this secret-less; however, in this example I will use a Service Principal to keep it easier to understand. Once we have our Service Principal, we need to configure its details in the Function App's Application Settings.

Hacking the function

[FunctionName("FN_ADLA_Job_Retry")]

public static void Run([TimerTrigger("0 0 */2 * * *")]TimerInfo myTimer, TraceWriter log)

{

var accountName = GetEnvironmentVariable("ADLA_NAME");

var tenantId = GetEnvironmentVariable("TENANT_ID");

var clientId = GetEnvironmentVariable("SERVICE_PRINCIPAL_ID");

var clientSecret = GetEnvironmentVariable("SERVICE_PRINCIPAL_SECRET");

 

ProcessFailedJobsAsync(tenantId, clientId, clientSecret, accountName).Wait();

}

That's our Azure Function, scheduled to run every 2 hours. Once triggered, we retrieve the AD tenant ID, the Service Principal ID and secret, and the account name of the target ADLA instance.
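GetEnvironmentVariable is not part of the Functions SDK; it is just a tiny helper (a sketch on my part) that reads a value from the Function App's settings, which are surfaced to the process as environment variables:

private static string GetEnvironmentVariable(string name)
{
    // Function App settings show up as process-level environment variables at runtime
    return Environment.GetEnvironmentVariable(name, EnvironmentVariableTarget.Process);
}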

Next, we write a method that gives us an ADLA REST client – authenticated with Azure AD and ready to make calls to the ADLA account.

private static async Task<DataLakeAnalyticsJobManagementClient> GetAdlaClientAsync(
    string clientId, string clientSecret, string tenantId)
{
    var creds = new ClientCredential(clientId, clientSecret);
    var clientCreds = await ApplicationTokenProvider
        .LoginSilentAsync(tenantId, creds);

    var adlsClient = new DataLakeAnalyticsJobManagementClient(clientCreds);
    return adlsClient;
}

The DataLakeAnalyticsJobManagementClient class comes from Microsoft.Azure.Management.DataLake.Analytics, Version=3.0.0.0 NuGet package that we have already installed into our project.

Next, we will write a method that gets us all the failed jobs:

private static async Task<Microsoft.Rest.Azure.IPage<JobInformationBasic>>
    GetFailedJobsAsync(string accountName, DataLakeAnalyticsJobManagementClient client)
{
    // We are ignoring the data pages that have older jobs.
    // If that's important to you, follow the page's NextPageLink (ListNextAsync) to retrieve them.
    return await client.Job
        .ListAsync(accountName,
            new ODataQuery<JobInformationBasic>(job => job.Result == JobResult.Failed));
}

We now have the capability to retrieve failed jobs – great! Now we should write the real logic that checks for failed jobs that have never been retried and resubmits them.

private const string RetryJobPrefix = "RETRY-";

public static async Task ProcessFailedJobsAsync(
    string tenantId, string clientId, string clientSecret, string accountName)
{
    var client = await GetAdlaClientAsync(clientId, clientSecret, tenantId);

    var failedJobs = await GetFailedJobsAsync(accountName, client);

    foreach (var failedJob in failedJobs)
    {
        // If it's a retry attempt we will not kick this off again.
        if (failedJob.Name.StartsWith(RetryJobPrefix)) continue;

        // We will retry this with a name prefixed with RETRY
        var retryJobName = $"{RetryJobPrefix}{failedJob.Name}";

        // Before we kick this off again, let's check if we have already retried it before
        if (!(await HasRetriedBeforeAsync(accountName, client, retryJobName)))
        {
            var jobDetails = await client.Job.GetAsync(accountName, failedJob.JobId.Value);
            var newJobID = Guid.NewGuid();

            var properties = new USqlJobProperties(jobDetails.Properties.Script);
            var parameters = new JobInformation(
                retryJobName,
                JobType.USql, properties,
                priority: failedJob.Priority,
                degreeOfParallelism: failedJob.DegreeOfParallelism,
                jobId: newJobID);

            // Resubmit this job now
            await client.Job.CreateAsync(accountName, newJobID, parameters);
        }
    }
}

private static async Task<bool> HasRetriedBeforeAsync(string accountName,
    DataLakeAnalyticsJobManagementClient client, string name)
{
    var jobs = await client.Job
        .ListAsync(accountName,
            new ODataQuery<JobInformationBasic>(job => job.Name == name));

    return jobs.Any();
}

That's all there is to it!

Final thoughts!

We can't avoid failures, but we can respond in ways that keep our system up, or at least minimize downtime. In this example, when one job fails unpredictably, its effects could cause the wider system to fail.

We should build our own mitigation against these uncertain factors – with automation.

Azure Web App – Removing IP Restrictions

Azure Web App allows us to configure IP restrictions (the same goes for Azure Functions and API Apps). This lets us define a priority-ordered allow/deny list of IP addresses as access rules for our app. The allow list can include IPv4 and IPv6 addresses.

IP restrictions flow

Source: MSDN

Developers often run into scenarios where they want to manipulate these restriction rules programmatically. Adding or removing IP restrictions from the portal is easy and documented here. We can also manipulate them with ARM templates, like the following:


"ipSecurityRestrictions": [
{
"ipAddress": "131.107.159.0/24",
"action": "Allow",
"tag": "Default",
"priority": 100,
"name": "allowed access"
}
],

However, sometimes it's handy to do this in PowerShell scripts that can be executed as a build/release task in a CI/CD pipeline (or in other environments), adding and/or removing restriction rules on the fly. Google finds quite a few blog posts that show how to add IP restrictions, but not a lot about removing one.

In this post, I will present a complete PowerShell script that allows us to do the following:

  • Add an IP restriction
  • View the IP restrictions
  • Remove all IP Restrictions

Add-AzureRmWebAppIPRestrictions

function Add-AzureRmWebAppIPRestrictions {
    Param(
        $WebAppName,
        $ResourceGroupName,
        $IPAddress,
        $Mask
    )

    $APIVersion = ((Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Web).ResourceTypes | Where-Object ResourceTypeName -eq sites).ApiVersions[0]
    $WebAppConfig = (Get-AzureRmResource -ResourceType Microsoft.Web/sites/config -ResourceName $WebAppName -ResourceGroupName $ResourceGroupName -ApiVersion $APIVersion)
    $IpSecurityRestrictions = $WebAppConfig.Properties.ipsecurityrestrictions

    if ($ipAddress -in $IpSecurityRestrictions.ipAddress) {
        "$IPAddress is already restricted in $WebAppName."
    }
    else {
        $webIP = [PSCustomObject]@{ipAddress = ''; subnetMask = ''; Priority = 300}
        $webIP.ipAddress = $ipAddress
        $webIP.subnetMask = $Mask
        if($null -eq $IpSecurityRestrictions){
            $IpSecurityRestrictions = @()
        }

        [System.Collections.ArrayList]$list = $IpSecurityRestrictions
        $list.Add($webIP) | Out-Null

        $WebAppConfig.properties.ipSecurityRestrictions = $list
        $WebAppConfig | Set-AzureRmResource  -ApiVersion $APIVersion -Force | Out-Null
        Write-Output "New restricted IP address $IPAddress has been added to WebApp $WebAppName"
    }
}

Get-AzureRmWebAppIPRestrictions

function Get-AzureRmWebAppIPRestrictions {
    param
    (
        [string] $WebAppName,
        [string] $ResourceGroupName
    )
    $APIVersion = ((Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Web).ResourceTypes | Where-Object ResourceTypeName -eq sites).ApiVersions[0]

    $WebAppConfig = (Get-AzureRmResource -ResourceType Microsoft.Web/sites/config -ResourceName  $WebAppName -ResourceGroupName $ResourceGroupName -ApiVersion $APIVersion)
    $IpSecurityRestrictions = $WebAppConfig.Properties.ipsecurityrestrictions
    if ($null -eq $IpSecurityRestrictions) {
        Write-Output "$WebAppName has no IP restrictions."
    }
    else {
        Write-Output "$WebAppName IP Restrictions: "
        $IpSecurityRestrictions
    }
}

Remove-AzureRmWebAppIPRestrictions

function  Remove-AzureRmWebAppIPRestrictions {
    param (
        [string]$WebAppName,
        [string]$ResourceGroupName
    )
    $APIVersion = ((Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Web).ResourceTypes | Where-Object ResourceTypeName -eq sites).ApiVersions[0]

    $r = Get-AzureRmResource -ResourceGroupName $ResourceGroupName -ResourceType Microsoft.Web/sites/config -ResourceName "$WebAppName/web" -ApiVersion $APIVersion
    $p = $r.Properties
    $p.ipSecurityRestrictions = @()
    Set-AzureRmResource -ResourceGroupName  $ResourceGroupName -ResourceType Microsoft.Web/sites/config -ResourceName "$WebAppName/web" -ApiVersion $APIVersion -PropertyObject $p -Force
}

And finally, to test them:

function  Test-Everything {
    if (!(Get-AzureRmContext)) {
        Write-Output "Please login to your Azure account"
        Login-AzureRmAccount
    }

    Get-AzureRmWebAppIPRestrictions -WebAppName "my-app" -ResourceGroupName "my-rg-name"

    Remove-AzureRmWebAppIPRestrictions -WebAppName "my-app" -ResourceGroupName "my-rg-name" 

    Add-AzureRmWebAppIPRestrictions -WebAppName "my-app" -ResourceGroupName "my-rg-name"  -IPAddress "192.51.100.0/24" -Mask ""

    Get-AzureRmWebAppIPRestrictions -WebAppName "my-app" -ResourceGroupName "my-rg-name"
}

Test-Everything
Thanks for reading!

Deploying Azure web job written in .net core

Lately I wrote a .NET Core web job and wanted to publish it via CD (continuous deployment) from Visual Studio Online. Soon I figured out that the Azure WebJobs SDK doesn't support .NET Core yet. The work I expected to take 10 minutes took about an hour.

If you are trying to figure this out as well, this blog post is for you.

I will describe the steps and provide a PowerShell script that does the deployment via the Kudu API. Kudu is the source control and deployment engine behind Azure App Service; it has a Zip API that allows us to deploy a zipped folder into an Azure app service.

Here are the steps you need to follow. Start by creating a simple .NET Core console application, then add a PowerShell file to the project that will do the deployment in your Visual Studio Online release pipeline. The PowerShell script will do the following:

  • Publish the project (using dotnet publish)
  • Make a zip out of the artifacts
  • Deploy the zip into the Azure web app

Publishing the project

We will use dotnet publish command to publish our project.

$resourceGroupName = "my-regource-group"
$webAppName = "my-web-job"
$projectName = "WebJob"
$outputRoot = "webjobpublish"
$ouputFolderPath = "webjobpublish\App_Data\Jobs\Continuous\my-web-job"
$zipName = "publishwebjob.zip"

$projectFolder = Join-Path `
    -Path "$((get-item $PSScriptRoot ).FullName)" `
    -ChildPath $projectName
$outputFolder = Join-Path `
    -Path "$((get-item $PSScriptRoot ).FullName)" `
    -ChildPath $ouputFolderPath
$outputFolderTopDir = Join-Path `
    -Path "$((get-item $PSScriptRoot ).FullName)" `
    -ChildPath $outputRoot
$zipPath = Join-Path `
    -Path "$((get-item $PSScriptRoot ).FullName)" `
    -ChildPath $zipName

if (Test-Path $outputFolder)
  { Remove-Item $outputFolder -Recurse -Force; }
if (Test-path $zipName) {Remove-item $zipPath -Force}
$fullProjectPath = "$projectFolder\$projectName.csproj"

dotnet publish "$fullProjectPath"
     --configuration release --output $outputFolder

Create a compressed artifact folder

We will use System.IO.Compression.Filesystem assembly to create the zip file.

Add-Type -assembly "System.IO.Compression.Filesystem"
[IO.Compression.Zipfile]::CreateFromDirectory(
        $outputFolderTopDir, $zipPath)

Upload the zip into Azure web app

Next step is to upload the zip file into the Azure web app. This is where we first need to fetch the credentials for the Azure web app and then use the Kudu API to upload the content. Here’s the script:

function Get-PublishingProfileCredentials
         ($resourceGroupName, $webAppName) {

    $resourceType = "Microsoft.Web/sites/config"
    $resourceName = "$webAppName/publishingcredentials"

    $publishingCredentials = Invoke-AzureRmResourceAction `
                 -ResourceGroupName $resourceGroupName `
                 -ResourceType $resourceType `
                 -ResourceName $resourceName `
                 -Action list `
                 -ApiVersion 2015-08-01 `
                 -Force
    return $publishingCredentials
}

function Get-KuduApiAuthorisationHeaderValue
         ($resourceGroupName, $webAppName) {

    $publishingCredentials =
      Get-PublishingProfileCredentials $resourceGroupName $webAppName

    return ("Basic {0}" -f `
        [Convert]::ToBase64String( `
        [Text.Encoding]::ASCII.GetBytes(("{0}:{1}" `
           -f $publishingCredentials.Properties.PublishingUserName, `
        $publishingCredentials.Properties.PublishingPassword))))
}

$kuduHeader = Get-KuduApiAuthorisationHeaderValue `
    -resourceGroupName $resourceGroupName `
    -webAppName $webAppName

$Headers = @{
    Authorization = $kuduHeader
}

# use kudu deploy from zip file
Invoke-WebRequest `
    -Uri https://$webAppName.scm.azurewebsites.net/api/zipdeploy `
    -Headers $Headers `
    -InFile $zipPath `
    -ContentType "multipart/form-data" `
    -Method Post

# Clean up the artifacts now
if (Test-Path $outputFolder)
      { Remove-Item $outputFolder -Recurse -Force; }
if (Test-path $zipName) {Remove-item $zipPath -Force}

PowerShell task in Visual Studio Online

Now we can leverage the Azure PowerShell task in Visual Studio Release pipeline and invoke the script to deploy the web job.

That’s it!

Thanks for reading, and have a nice day!

Zero-Secret application development with Azure Managed Service Identity

Committing secrets along with application code to a repository is one of the most common mistakes developers make. This can get nasty when an application is developed for cloud deployment. You have probably read the story of AWS S3 secrets checked in to GitHub: the developer corrected the mistake in five minutes, but still received a hefty invoice because bots crawl open source sites looking for secrets. There are many tools that can scan code for potential secret leakage, and they can be embedded in a CI/CD pipeline. These tools do a great job of finding deliberate or unintentional commits that contain secrets before they get merged to a release/master branch, but they do not absolutely protect against all potential secret leaks. Developers still need to carefully review their code on every commit.

Azure Managed Service Identity (MSI) can address this problem in a very neat way. MSI makes it possible to design applications that are secret-less: there is no need to have any secrets (especially database connection strings, storage keys and the like) in the application code at all.

Secret management in application

Let's recall how we were doing secret management yesterday. For simplicity's sake, say we have a web application backed by a SQL Server database. This means we almost certainly have a configuration key (the SQL connection string) in our configuration file. If we have storage accounts, we might have a Shared Access Signature (aka SAS token) in our config file as well.

As we can see, we keep adding secrets one after another to our configuration file, in plain text. We now need credential scanner tasks in our pipelines and local configuration files in place (for local development), and we need to mitigate the mistake of checking secrets in to the repository.

Azure Key Vault as secret store

Azure Key Vault can simplify the above a lot and make things much cleaner. We can store the secrets in a Key Vault, and in the CI/CD pipeline we can fetch them from the vault and write them into configuration files just before we publish the application code into the cloud infrastructure. VSTS build and release pipelines have a concept of a Library that can be linked with Key Vault secrets, designed to do exactly that. The configuration file in this case has string placeholders that are replaced with secrets during the CD run.

The above works great, but you still have a configuration file with all the placeholders for secrets (when you have multiple services that have secrets), which makes it difficult to manage across local and cloud environments. An improvement is to keep all the secrets in Key Vault and let the application load them at runtime (during the startup event) directly from the vault. This is much easier to manage and a pretty clean solution: the local environment can use a different Key Vault than production, the configuration logic becomes extremely simple, and the configuration file now has only one secret. That is a Service Principal secret, which is used to talk to the Key Vault during startup.
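To make that concrete, here is a minimal sketch of the startup wiring for an ASP.NET Core 2.x application, assuming the Microsoft.Extensions.Configuration.AzureKeyVault package; the vault name and the setting keys are illustrative:

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((context, config) =>
        {
            var builtConfig = config.Build();

            // The single remaining secret: the service principal that can read the vault
            config.AddAzureKeyVault(
                $"https://{builtConfig["KeyVault:Name"]}.vault.azure.net/",
                builtConfig["KeyVault:ClientId"],
                builtConfig["KeyVault:ClientSecret"]);
        })
        .UseStartup<Startup>()
        .Build();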

So we have all the secrets stored in a vault and exactly one secret in our configuration file – nice! But if we accidentally commit this single secret, all the other secrets in the vault are also compromised. What can we do to make this more secure? Let's recap our knowledge about service principals before we draw up the solution.

What is a Service Principal?

A resource that is secured by an Azure AD tenant can only be accessed by a security principal. A user is granted access to an AD resource via their security principal, known as a User Principal. When a service (a piece of software) wants to access a secured resource, it needs to use the security principal of an Azure AD Application Object; we call this a Service Principal. You can think of a Service Principal as an instance of an Azure AD Application.

A service principal has a secret, often referred to as the Client Secret, which is analogous to the password of a user principal. The Service Principal ID (often known as the Application ID or Client ID) and the Client Secret together can authenticate an application to Azure AD for secure resource access. In our earlier example, we needed to keep this client secret (the only secret) in our configuration file to gain access to the Key Vault. Client secrets have an expiration period, and it is up to the application developers to renew them to keep things secure. In a large solution, keeping all the service principal secrets renewed on short expiration windows can easily turn into a difficult job.

Managed Service Identity

Managed Service Identity is explained in detail in the Microsoft documentation. In layman's terms, MSI is literally a Service Principal, created directly by Azure, whose client secret is stored and rotated by Azure as well; that is what makes it "managed". If we create an Azure web app and turn on Managed Service Identity for it (which is just a toggle switch), Azure will provision an Application Object in AD (the Azure Active Directory tenant), create a Service Principal for it and store the client secret somewhere we never have to see. This MSI now represents the web application's identity in Azure AD.

Managed Service Identity can be provisioned in the Azure Portal, Azure PowerShell or the Azure CLI, as below:

az login
az group create --name myResourceGroup --location westus
az appservice plan create --name myPlan --resource-group myResourceGroup --sku S1
az webapp create --name myApp --plan myPlan --resource-group myResourceGroup
az webapp identity assign --name myApp --resource-group myResourceGroup

Or via Azure Resource Manager Template:

{
    "apiVersion": "2016-08-01",
    "type": "Microsoft.Web/sites",
    "name": "[variables('appName')]",
    "location": "[resourceGroup().location]",
    "identity": {
        "type": "SystemAssigned"
    },
    "properties": {
        "name": "[variables('appName')]",
        "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
        "hostingEnvironment": "",
        "clientAffinityEnabled": false,
        "alwaysOn": true
    },
    "dependsOn": [
        "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]"
    ]
}

Going back to our key vault example, with MSI we can now eliminate the client secret of Service Principal from our application code.

But wait! We used to read keys/secrets from Key Vault during application startup, and we needed that client secret to do so. How are we going to talk to Key Vault now, without the secret?

Using MSI from App service

Azure provides a couple of environment variables for App Services that have MSI enabled:

  • MSI_ENDPOINT
  • MSI_SECRET

The first one is a URL that our application can make a request to, passing the MSI_SECRET value as a header, and the response is an access token that lets us talk to the Key Vault. This sounds a bit complex, but fortunately we don't need to do it by hand.
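For the curious, the raw call is roughly the following (a sketch; the api-version value and the Secret header come from the App Service MSI documentation, and in practice the library below does this for us):

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class RawMsiTokenClient
{
    public static async Task<string> GetTokenResponseAsync(string resource)
    {
        var endpoint = Environment.GetEnvironmentVariable("MSI_ENDPOINT");
        var secret = Environment.GetEnvironmentVariable("MSI_SECRET");

        using (var http = new HttpClient())
        {
            var request = new HttpRequestMessage(
                HttpMethod.Get,
                $"{endpoint}?resource={Uri.EscapeDataString(resource)}&api-version=2017-09-01");
            request.Headers.Add("Secret", secret);

            var response = await http.SendAsync(request);
            response.EnsureSuccessStatusCode();

            // JSON payload containing access_token, expires_on, token_type, etc.
            return await response.Content.ReadAsStringAsync();
        }
    }
}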

The Microsoft.Azure.Services.AppAuthentication library for .NET wraps these complexities for us and provides an easy API to get the access token.

We need to add references to the Microsoft.Azure.Services.AppAuthentication and Microsoft.Azure.KeyVault NuGet packages to our application.

Now we can get the access token to communicate to the key vault in our startup like following:


using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.Azure.KeyVault;

// ...

var azureServiceTokenProvider = new AzureServiceTokenProvider();

string accessToken = await azureServiceTokenProvider.GetAccessTokenAsync("https://management.azure.com/");

// OR

var kv = new KeyVaultClient(new KeyVaultClient
.AuthenticationCallback
(azureServiceTokenProvider.KeyVaultTokenCallback));

This is neat, agree? We now have our application configuration file that has no secrets or keys whatsoever. Isn’t it cool?

Step up – activating zero-secret mode

We have managed to deploy our web application with zero secrets above. However, we still have secrets for the SQL database, storage accounts and so on in our Key Vault; we just don't have to put them in our configuration files. They are still there, loaded during the startup event of our web application. This is a great improvement, of course, but MSI allows us to take it a stage further.

Azure AD Authentication for Azure Services

To leverage MSI's full potential we should use Azure AD authentication (RBAC controls). For instance, we have been using Shared Access Signatures or SQL connection strings to communicate with Azure Storage/Service Bus and SQL Server. With AD authentication, we instead use a security principal that has a role assignment via Azure RBAC.

Azure is gradually enabling AD authentication for resources. As of today (at the time of writing this blog) the following services/resources support AD authentication with Managed Service Identity:

  • Azure Resource Manager (https://management.azure.com/) – Available since September 2017
  • Azure Key Vault (https://vault.azure.net) – Available since September 2017
  • Azure Data Lake (https://datalake.azure.net/) – Available since September 2017
  • Azure SQL (https://database.windows.net/) – Available since October 2017
  • Azure Event Hubs (https://eventhubs.azure.net) – Available since December 2017
  • Azure Service Bus (https://servicebus.azure.net) – Available since December 2017
  • Azure Storage (https://storage.azure.com/) – Preview as of May 2018

Access can be assigned via the Azure portal, PowerShell or the Azure CLI.

Read more updated info here.

AD authentication finally allows us to completely remove those secrets from Key Vault and access the storage accounts, Data Lake stores and SQL servers directly with MSI tokens. Let's see some examples to understand this.

Example: Accessing Storage Queues with MSI

In our earlier example, we talked about the Azure web app, for which we have enabled Managed Service Identity. In this example we will see how we can put a message in Azure Storage Queue using MSI. Assuming our web application name is:

contoso-msi-web-app

Once we have enabled the managed service identity for this web app, Azure provisioned an identity (an AD Application object and a Service Principal for it) with the same name as the web application, i.e. contoso-msi-web-app.

Now we need to set up a role assignment for this Service Principal so that it can access the storage account. We can do that in the Azure Portal: go to the storage account's IAM blade (the access control page) and add a role for this principal. Of course, you can also do that with PowerShell.

If you are not doing it in the Portal, you need to know the ID of the MSI. Here's how you get it (in the Azure CLI console):


az resource show -n $webApp -g $resourceGroup --resource-type Microsoft.Web/sites --query identity

You should see an output like following:

{
"principalId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"tenantId": "xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx",
"type": null
}

The Principal ID is what you are after. We can now assign roles for this principal as follows:

# $principalId holds the "principalId" GUID returned by the command above
$existingRoleDef = Get-AzureRmRoleAssignment `
    -ObjectId $principalId `
    -RoleDefinitionName "Contributor" `
    -ResourceGroupName "RGP NAME"
if ($null -eq $existingRoleDef) {
    New-AzureRmRoleAssignment `
        -ObjectId $principalId `
        -RoleDefinitionName "Contributor" `
        -ResourceGroupName "RGP NAME"
}

You can run these commands in the CD pipeline with Azure PowerShell tasks in VSTS release pipelines.

Let's write an MSI token helper class.
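A minimal sketch of such a helper, built on the AzureServiceTokenProvider from the Microsoft.Azure.Services.AppAuthentication package we used earlier (the class and method names here are my own):

using System.Threading.Tasks;
using Microsoft.Azure.Services.AppAuthentication;

public static class MsiTokenHelper
{
    private static readonly AzureServiceTokenProvider TokenProvider =
        new AzureServiceTokenProvider();

    public static Task<string> GetTokenAsync(string resource)
    {
        // resource examples: "https://storage.azure.com/", "https://database.windows.net/"
        return TokenProvider.GetAccessTokenAsync(resource);
    }
}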

We will use the Token Helper in a Storage Account helper class.

Now, let’s write a message into the Storage Queue.
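Here is a sketch of both, reusing the MsiTokenHelper above. The account and queue names are illustrative, and TokenCredential/StorageCredentials come from a WindowsAzure.Storage version that supports AAD tokens:

using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Queue;

public static class StorageQueueHelper
{
    public static async Task EnqueueAsync(string accountName, string queueName, string message)
    {
        // Get an AAD token for the storage resource using the web app's MSI
        var accessToken = await MsiTokenHelper.GetTokenAsync("https://storage.azure.com/");
        var credentials = new StorageCredentials(new TokenCredential(accessToken));

        var queueUri = new Uri($"https://{accountName}.queue.core.windows.net/{queueName}");
        var queue = new CloudQueue(queueUri, credentials);

        // No account key or SAS anywhere - the MSI token authorizes the call
        await queue.AddMessageAsync(new CloudQueueMessage(message));
    }
}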

Isn’t it awesome?

Another example, this time SQL server

As of now, Azure SQL Database does not support creating logins or users from service principals created for a Managed Service Identity. Fortunately, we have a workaround: we can add the MSI principal to an AAD group as a member, and then grant that group access to the database.

We can use the Azure CLI to create the group and add our MSI to it:

az ad group create --display-name sqlusers --mail-nickname 'NotNeeded'
az ad group member add -g sqlusers --member-id xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx

Again, we are using the MSI's principal ID as the member id parameter here.
Next, we need to allow this group to access the SQL database. PowerShell to the rescue again:

$query = @"CREATE USER [$adGroupName] FROM EXTERNAL PROVIDER
GO
ALTER ROLE db_owner ADD MEMBER [$adGroupName]
"@
sqlcmd.exe -S "tcp:$sqlServer,1433" `
-N -C -d $database -G -U $sqlAdmin.UserName `
-P $sqlAdmin.GetNetworkCredential().Password `
-Q $query

Let’s write a token helper class for SQL as we did before for storage queue.

We are almost done, now we can run SQL commands from web app like this:
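A sketch of that, again reusing MsiTokenHelper: SqlConnection exposes an AccessToken property, so the connection string needs no credentials at all (server and database names are illustrative):

using System.Data.SqlClient;
using System.Threading.Tasks;

public static class SqlConnectionHelper
{
    public static async Task<SqlConnection> OpenConnectionAsync(string server, string database)
    {
        var connection = new SqlConnection(
            $"Data Source={server}; Initial Catalog={database};");

        // The token is issued for the Azure SQL resource - no password anywhere
        connection.AccessToken =
            await MsiTokenHelper.GetTokenAsync("https://database.windows.net/");

        await connection.OpenAsync();
        return connection;
    }
}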

Voila!

Conclusion

Managed Service Identity is awesome and powerful; it really enables applications whose security is easy to manage over a longer period. Especially when you have lots of applications, you end up with a huge number of service principals, and managing their secrets over time and keeping track of their expiration is a nightmare. Managed Service Identity makes it so much nicer!

 

Thanks for reading!

CQRS and ES on Azure Table Storage

Lately I was playing with Event Sourcing and the Command Query Responsibility Segregation (CQRS) pattern on Azure Table Storage, and thought of creating a lightweight library that facilitates writing such applications. I ended up with a NuGet package to do this; here is the GitHub repository.

A lightweight CQRS supporting library with Event Store based on Azure Table Storage.

Quick start guide

Install

Install the SuperNova.Storage Nuget package into the project.

Install-Package SuperNova.Storage -Version 1.0.0

The dependencies of the package are:

  • .NETCoreApp 2.0
  • Microsoft.Azure.DocumentDB.Core (>= 1.7.1)
  • Microsoft.Extensions.Logging.Debug (>= 2.0.0)
  • SuperNova.Shared (>= 1.0.0)
  • WindowsAzure.Storage (>= 8.5.0)

Implementation guide

Write Side – Event Sourcing

Once the package is installed, we can start sourcing events in an application. For example, let’s start with a canonical example of UserController in a Web API project.

We can use dependency injection to make the EventStore available in our controller.

Here’s an example where we register an instance of Event Store with DI framework in our Startup.cs

// Config object encapsulates the table storage connection string
services.AddSingleton(new EventStore( ... provide config ));

Now the controller:

[Produces("application/json")]
[Route("users")]
public class UsersController : Controller
{
public UsersController(IEventStore eventStore)
{
this.eventStore = eventStore; // Here capture the event store handle
}

... other methods skipped here
}

Aggregate

Implementing event sourcing becomes much more approachable when it's paired with Domain-Driven Design (DDD). We are going to assume familiarity with DDD concepts (especially Aggregate Roots).

An aggregate is our consistency boundary (read: transactional boundary) in Event Sourcing. (Technically, aggregate IDs are our partition keys on the Event Store table – therefore, we can only apply an atomic operation at the level of a single aggregate root.)

Let’s create an Aggregate for our User domain entity:

using SuperNova.Shared.Messaging.Events.Users;
using SuperNova.Shared.Supports;

public class UserAggregate : AggregateRoot
{
    private string _userName;
    private string _emailAddress;
    private Guid _userId;
    private bool _blocked;

    // ... command handlers and event appliers follow
}

Once we have the aggregate class written, we should come up with the events that are relevant to this aggregate. We can use event storming to discover them.

Here are the events that we will use for our example scenario:
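The event types themselves are simple POCOs. Their exact definitions aren't shown here, but based on how the aggregate consumes them they look roughly like this (a sketch; the real types live in SuperNova.Shared.Messaging.Events.Users, and the DomainEvent base type name is my assumption):

public class UserRegistered : DomainEvent
{
    public Guid AggregateId { get; set; }
    public string UserName { get; set; }
    public string Email { get; set; }
}

public class UserBlocked : DomainEvent
{
    public Guid AggregateId { get; set; }
}

public class UserNameChanged : DomainEvent
{
    public Guid AggregateId { get; set; }
    public string NewName { get; set; }
}

Inside the aggregate, each of these events is applied to mutate its internal state: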

public class UserAggregate : AggregateRoot
{
    // ... skipped other codes

    #region Apply events
    private void Apply(UserRegistered e)
    {
        this._userId = e.AggregateId;
        this._userName = e.UserName;
        this._emailAddress = e.Email;
    }

    private void Apply(UserBlocked e)
    {
        this._blocked = true;
    }

    private void Apply(UserNameChanged e)
    {
        this._userName = e.NewName;
    }
    #endregion

    // ... skipped other codes
}

Now that we have our business events defined, we will define our commands for the aggregate:

public class UserAggregate : AggregateRoot
{
    #region Accept commands
    public void RegisterNew(string userName, string emailAddress)
    {
        Ensure.ArgumentNotNullOrWhiteSpace(userName, nameof(userName));
        Ensure.ArgumentNotNullOrWhiteSpace(emailAddress, nameof(emailAddress));

        ApplyChange(new UserRegistered
        {
            AggregateId = Guid.NewGuid(),
            Email = emailAddress,
            UserName = userName
        });
    }

    public void BlockUser(Guid userId)
    {
        ApplyChange(new UserBlocked
        {
            AggregateId = userId
        });
    }

    public void RenameUser(Guid userId, string name)
    {
        Ensure.ArgumentNotNullOrWhiteSpace(name, nameof(name));

        ApplyChange(new UserNameChanged
        {
            AggregateId = userId,
            NewName = name
        });
    }
    #endregion

    // ... skipped other codes
}

So far so good!

Now we will modify the web api controller to send the correct command to the aggregate.

public class UserPayload
{
    public string UserName { get; set; }
    public string Email { get; set; }
}

// POST: User
[HttpPost]
public async Task<IActionResult> Post(Guid projectId, [FromBody]UserPayload user)
{
    Ensure.ArgumentNotNull(user, nameof(user));

    var userId = Guid.NewGuid();

    await eventStore.ExecuteNewAsync(
        Tenant, "user_event_stream", userId, async () =>
        {
            var aggregate = new UserAggregate();

            aggregate.RegisterNew(user.UserName, user.Email);

            return await Task.FromResult(aggregate);
        });

    return new JsonResult(new { id = userId });
}

And another API to modify existing users in the system:

// PUT: User
[HttpPut("{userId}")]
public async Task<IActionResult> Put(Guid projectId, Guid userId, [FromBody]string name)
{
    Ensure.ArgumentNotNullOrWhiteSpace(name, nameof(name));

    await eventStore.ExecuteEditAsync(
        Tenant, "user_event_stream", userId,
        async (aggregate) =>
        {
            aggregate.RenameUser(userId, name);

            await Task.CompletedTask;
        }).ConfigureAwait(false);

    return new JsonResult(new { id = userId });
}

That's it! We have our WRITE side completed. The event store now contains the events for the user event stream.

EventStore

Read Side – Materialized Views

We can consume the events in a separate console worker process and generate the materialized views for the READ side.

The readers (the console application – an Azure Web Worker, for instance) act like feed processors and have their own lease collection, which makes them fault tolerant and resilient. If a reader crashes, it catches up from the last event version that was materialized successfully. It polls on purpose – instead of using a message broker (Service Bus, for instance) – to speed things up and avoid latencies during event propagation. Scalability is ensured by dedicating a lease per tenant and event stream, which provides pretty high scalability.

How to listen for events?

In a worker application (typically a console application) we will listen for events:

private static async Task Run()
{
    var eventConsumer = new EventStreamConsumer(
        // ... skipped for simplicity
        "user-event-stream",
        "user-event-stream-lease");

    await eventConsumer.RunAndBlock((evts) =>
    {
        foreach (var evt in evts)
        {
            if (evt is UserRegistered userAddedEvent)
            {
                readModel.AddUserAsync(new UserDto
                {
                    UserId = userAddedEvent.AggregateId,
                    Name = userAddedEvent.UserName,
                    Email = userAddedEvent.Email
                }, evt.Version);
            }
            else if (evt is UserNameChanged userChangedEvent)
            {
                readModel.UpdateUserAsync(new UserDto
                {
                    UserId = userChangedEvent.AggregateId,
                    Name = userChangedEvent.NewName
                }, evt.Version);
            }
        }
    }, CancellationToken.None);
}

static void Main(string[] args)
{
    Run().Wait();
}
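The readModel used above is whatever persistence you choose for the read side. Here is a sketch of the DTO and the repository surface implied by the consumer (the names and the version type are inferred from the snippet, so treat them as assumptions):

public class UserDto
{
    public Guid UserId { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

public interface IReadModel
{
    // The event version is stored alongside the view to keep materialization idempotent
    Task AddUserAsync(UserDto user, long version);
    Task UpdateUserAsync(UserDto user, long version);
}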

Now we have a document collection (we are using Cosmos DB's DocumentDB API in this example for materialization, but it could essentially be any database) that is updated as we store events in the event stream.

Conclusion

The library is very lightweight and heavily influenced by Greg Young's event store and aggregate models. Feel free to use it and contribute.

Thank you!