Continuously deliver changes to Azure API Management service with the Git Configuration Repository

What is API Management?

Publishing data, insights, and business capabilities via APIs in a unified way can be challenging. Azure API Management (APIM) makes it simpler than ever.

Businesses everywhere are looking to extend their operations as a digital platform, creating new channels, finding new customers and driving deeper engagement with existing ones. API Management provides the core competencies to ensure a successful API program through developer engagement, business insights, analytics, security, and protection. You can use Azure API Management to take any backend and launch a full-fledged API program based on it. [Source]

The challenge – Continuous Deployment

These days it is very common to have many distributed services (microservices, say) publishing their APIs through a single Azure API Management portal. For instance, Order and Invoice APIs are published on an e-commerce API portal, although they are backed by isolated Order and Invoice microservices. Autonomous teams build these APIs, often working in isolation, but their API specifications (mostly OpenAPI/Swagger documents) must be published through a shared API Management service. Different teams with different release cadences can make continuous deployment of the API portal challenging and error prone.

Azure API Management ships a set of PowerShell cmdlets (i.e. Import-AzureRmApiManagementApi and Publish-AzureRmApiManagementTenantGitConfiguration) that allow deploying API documentation directly to APIM. This works great for a single API development team. It gets trickier when multiple teams are pushing changes to one APIM instance, as in the example above: every team needs deployment credentials in its own release pipeline – which might be undesirable for a shared APIM instance – and centrally governing these changes becomes difficult.
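For reference, a single team deploying directly with these cmdlets could do something like the following (a minimal sketch; the resource group, service name, API path, and Swagger file location are placeholders):

# Create a context for the APIM instance (placeholder names)
$context = New-AzureRmApiManagementContext `
    -ResourceGroupName 'my-resource-group' `
    -ServiceName 'my-apim-service'

# Import (or update) an API from a local Swagger document
Import-AzureRmApiManagementApi `
    -Context $context `
    -SpecificationFormat 'Swagger' `
    -SpecificationPath '.\order-api.swagger.json' `
    -Path 'orders'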

APIM Configuration Git Repository

APIM has a pretty neat feature: each APIM instance has a configuration database exposed as a Git repository, containing the metadata and configuration information for the instance. We can clone the configuration repository and push changes back using our very familiar Git commands and tool sets, and APIM lets us publish those pushed changes – sweet!

This allows us to download different versions of our APIM configuration state. Managing bulk APIM configuration (API specifications, products, groups, policies, branding styles, etc.) in one central repository, with very familiar Git tools, is super convenient.

The following diagram shows an overview of the different ways to configure your API Management service instance.

api-management-git-configure

[Source]

This sounds great! We will leverage this capability and make it even nicer: multiple teams can develop their APIs without depending on each other's release schedules, while a central release pipeline publishes the changes from all the API services.

Solution design

The idea is pretty straightforward. Each team develops its own API specification and, when it wants to release, creates a PR (Pull Request) to a shared repository that contains the APIM configuration clone. Once the PR is peer reviewed and merged, the release pipeline kicks in and deploys the changes to Azure APIM.

The workflow looks like following:

workflow
Development and deployment workflow

Building the solution

We will provision an APIM instance on Azure. We can do that with an ARM template (we will not go into the details of that; you can use this GitHub template).
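Deploying such a template could look roughly like this (a minimal sketch; the resource group name, location, and template file names are placeholders):

# Create a resource group and deploy the APIM ARM template (placeholder names)
New-AzureRmResourceGroup -Name 'my-resource-group' -Location 'West Europe'

New-AzureRmResourceGroupDeployment `
    -ResourceGroupName 'my-resource-group' `
    -TemplateFile '.\apim-template.json' `
    -TemplateParameterFile '.\apim-template.parameters.json'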

Once we have APIM provisioned, we can see that the Git repository is not yet synchronized with the configuration database (notice Out of sync in the following image).

Out of sync

We will sync it and clone a copy of the configuration database to our local machine using the following PowerShell script. (You need to run Login-AzureRmAccount in the PowerShell console if you are not already logged in to Azure.)

# $ResourceGroup and $ServiceName identify the target APIM instance
$context = New-AzureRmApiManagementContext `
    -ResourceGroupName $ResourceGroup `
    -ServiceName $ServiceName
Write-Output "Initializing context...Completed"

Write-Output "Syncing Git Repo with current API management state..."
Save-AzureRmApiManagementTenantGitConfiguration `
    -Context $context `
    -Branch 'master' `
    -PassThru -Force

This brings the Git repository in sync.

Sync

To clone the repository to our local machine, we need to generate Git credentials first. Let's do that now:

Function ExecuteGitCommand {
    param
    (
        [System.Object[]]$gitCommandArguments
    )

    $gitExePath = "C:\Program Files\git\bin\git.exe"
    & $gitExePath $gitCommandArguments
}

# $SubscriptionId, $ResourceGroup, $ServiceName and $TempDirectory are assumed
# to be defined earlier (e.g. as script parameters).

# Request Git credentials that are valid for one hour
$expiry = (Get-Date) + '1:00:00'
$parameters = @{
    "keyType" = "primary"
    "expiry"  = ('{0:yyyy-MM-ddTHH:mm:ss.000Z}' -f $expiry)
}

$resourceId = '/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.ApiManagement/service/{2}/users/git' -f $SubscriptionId, $ResourceGroup, $ServiceName

if ((Test-Path -Path $TempDirectory)) {
    Remove-Item $TempDirectory -Force -Recurse -ErrorAction "Stop"
}

$gitRemoteSrcPath = Join-Path -Path $TempDirectory -ChildPath 'remote-api-src'

Write-Output "Retrieving Git Credentials..."
$gitUsername = 'apim'
$gitPassword = (Invoke-AzureRmResourceAction `
        -Action 'token' `
        -ResourceId $resourceId `
        -Parameters $parameters `
        -ApiVersion '2016-10-10' `
        -Force).Value
$escapedGitPassword = [System.Uri]::EscapeDataString($gitPassword)
Write-Output "Retrieving Git Credentials...Completed"

# Clone the APIM configuration repository into the temp directory
$gitRepositoryUrl = 'https://{0}:{1}@{2}.scm.azure-api.net/' -f $gitUsername, $escapedGitPassword, $ServiceName
ExecuteGitCommand -gitCommandArguments @("clone", "$gitRepositoryUrl", "$gitRemoteSrcPath")

Now we have a copy of the repository on our local machine. This is just a mirror of our APIM configuration database. We will create a repository in our source control (I am using VSTS). This will be our shared APIM source repository: every team will issue a Pull Request with its API specification against this repository, which can be reviewed by peers and eventually merged into the master branch.
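For an individual team, contributing a change could be as simple as this (a sketch; the repository URL, folder layout, branch, and file names are placeholders):

# Clone the shared VSTS repository (placeholder URL)
git clone https://myaccount.visualstudio.com/DefaultCollection/_git/shared-apim-config
cd shared-apim-config

# Add or update the team's API specification on a feature branch
git checkout -b feature/order-api-v2
Copy-Item ..\order-api.swagger.json .\api-specs\order-api.swagger.json
git add .
git commit -m "Update Order API specification"
git push -u origin feature/order-api-v2
# ...then raise a Pull Request to master in VSTS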

Building the release pipeline

Time to deploy changes from our shared repository to the APIM instance. We need to perform the following steps:

  1. Sync the configuration database to the APIM Git repository.
  2. Clone the latest changes to our build agent.
  3. Copy all updated API specifications (approved and merged into our VSTS repository's master branch) into the cloned repository.
  4. Commit all changes to the cloned repository.
  5. Push changes from the cloned repository to origin.
  6. Publish changes from the Git repository to the APIM instance.

I have compiled a single PowerShell script that performs all these steps, in that order. The idea is to use this script in our release pipeline to deploy releases to APIM.
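A minimal sketch of what that script could look like is shown below, reusing the ExecuteGitCommand helper and the $ResourceGroup, $ServiceName, $gitRemoteSrcPath and $gitRepositoryUrl values from the earlier snippets; the $SourceDirectory variable (the checked-out VSTS sources) is a placeholder:

# 1. Sync the configuration database to the APIM Git repository
$context = New-AzureRmApiManagementContext `
    -ResourceGroupName $ResourceGroup -ServiceName $ServiceName
Save-AzureRmApiManagementTenantGitConfiguration `
    -Context $context -Branch 'master' -PassThru -Force

# 2. Clone the latest APIM configuration to the build agent
#    ($gitRepositoryUrl was built with the generated Git credentials above)
ExecuteGitCommand -gitCommandArguments @("clone", "$gitRepositoryUrl", "$gitRemoteSrcPath")

# 3. Copy the approved and merged API specifications from the VSTS
#    repository's master branch into the cloned repository
Copy-Item -Path (Join-Path $SourceDirectory '*') `
    -Destination $gitRemoteSrcPath -Recurse -Force

# 4. Commit all changes to the cloned repository
#    (the build agent may need git user.name / user.email configured)
Set-Location $gitRemoteSrcPath
ExecuteGitCommand -gitCommandArguments @("add", "--all")
ExecuteGitCommand -gitCommandArguments @("commit", "-m", "Release pipeline: publish API changes")

# 5. Push changes from the cloned repository to origin
ExecuteGitCommand -gitCommandArguments @("push")

# 6. Publish changes from the Git repository to the APIM instance
Publish-AzureRmApiManagementTenantGitConfiguration `
    -Context $context -Branch 'master' -PassThru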

Final thoughts

The Git repository model for deploying API specifications to a single APIM instance makes management extremely easy. We could have done all of this with PowerShell alone, but in a multiple-team scenario that gets messy pretty quickly. Having a central Git repository as the release gateway (and the only way to make any changes to the APIM instance) reduces the complexity to a minimum.

Resilient Azure Data Lake Analytics (ADLA) Jobs with Azure Functions

Azure Data Lake Analytics is an on-demand analytics job service that allows writing queries to transform data and grab insights efficiently. The analytics service can handle jobs of any scale instantly by setting the dial for how much power you need.

Jobs

In many organizations these jobs can play a crucial role, and the reliability of their execution can be business critical. Lately I encountered a scenario where a particular U-SQL job failed with the following error message:

Usql – Job failed due to internal system error – NM_CANNOT_LAUNCH_JM

A bit of research on Google revealed that it is a system error, which does not leave much of a diagnostic clue to reason about. Retrying the job manually (by clicking the button in the portal) succeeded, which makes it somewhat unpredictable and uncertain. However, uncertainty like this is the norm when developing software for the cloud. We have all read or heard about Netflix's Chaos Monkeys.

What is resiliency?

Resiliency is the capability to handle partial failures while continuing to execute and not crash. In modern application architectures — whether it be micro services running in containers on-premises or applications running in the cloud — failures are going to occur. For example, applications that communicate over networks (like services talking to a database or an API) are subject to transient failures. These temporary faults cause lesser amounts of downtime due to timeouts, overloaded resources, networking hiccups, and other problems that come and go and are hard to reproduce. These failures are usually self-correcting. (Source)
Today I will present an approach that mitigates these abrupt job failures.

The Solution Design

Basically, I want a job-progress watcher that looks for failed jobs and resubmits them as retry logic. I also do not want to retry more than once, which has the potential to create a forever-failing loop. The watcher can run at a set frequency – say every 5 minutes or so.

Azure Functions

Azure Functions keeps impressing me with its lightweight build and consumption-based pricing model. Functions can run with different triggers, among them the timer (schedule) trigger, which perfectly fits my purpose.

Prerequisites

The function app needs to retrieve failed ADLA jobs and resubmit them as needed. This can be achieved with the Microsoft.Azure.Management.DataLake.Analytics, Version=3.0.0.0 NuGet package. We will also need the Microsoft.Rest.ClientRuntime.Azure.Authentication, Version=2.0.0.0 NuGet package for access token retrieval.
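From the Visual Studio Package Manager Console, installing these packages could look like the following (a sketch; pick the package versions that match the assembly versions mentioned above):

# NuGet Package Manager Console (PowerShell)
Install-Package Microsoft.Azure.Management.DataLake.Analytics
Install-Package Microsoft.Rest.ClientRuntime.Azure.Authentication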

Configuration

We need a Service Principal to be able to interact with the ADLA instance on Azure. Managed Service Identity (I have written about it before) could also be used to make it secretless; however, in this example I will use a Service Principal to keep things easier to understand. Once we have our Service Principal, we need to configure it in the Function App's Application Settings.
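The setting names below match the GetEnvironmentVariable calls in the function code that follows. A PowerShell sketch for provisioning them is shown here (placeholder values; note that Set-AzureRmWebApp -AppSettings replaces the entire settings collection, so in a real pipeline merge with the existing settings first):

# Placeholder values for the ADLA account and Service Principal
$settings = @{
    "ADLA_NAME"                = "my-adla-account"
    "TENANT_ID"                = "00000000-0000-0000-0000-000000000000"
    "SERVICE_PRINCIPAL_ID"     = "00000000-0000-0000-0000-000000000000"
    "SERVICE_PRINCIPAL_SECRET" = "<secret>"
}

# WARNING: -AppSettings overwrites all existing app settings; merge first in real use
Set-AzureRmWebApp `
    -ResourceGroupName 'my-resource-group' `
    -Name 'my-function-app' `
    -AppSettings $settings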

Hacking the function

[FunctionName("FN_ADLA_Job_Retry")]
public static void Run([TimerTrigger("0 0 */2 * * *")]TimerInfo myTimer, TraceWriter log)
{
    // GetEnvironmentVariable is a small helper that reads the
    // Function App's Application Settings (configured above).
    var accountName = GetEnvironmentVariable("ADLA_NAME");
    var tenantId = GetEnvironmentVariable("TENANT_ID");
    var clientId = GetEnvironmentVariable("SERVICE_PRINCIPAL_ID");
    var clientSecret = GetEnvironmentVariable("SERVICE_PRINCIPAL_SECRET");

    ProcessFailedJobsAsync(tenantId, clientId, clientSecret, accountName).Wait();
}

That's our Azure Function, scheduled to run every 2 hours. Once triggered, we retrieve the AD tenant ID, the Service Principal ID and secret, and the name of the target ADLA account.

Next, we write a method that gives us an ADLA REST client – authenticated with Azure AD and ready to make calls to the ADLA account.

private static async Task<DataLakeAnalyticsJobManagementClient> GetAdlaClientAsync(
    string clientId, string clientSecret, string tenantId)
{
    // Authenticate the Service Principal against Azure AD
    var creds = new ClientCredential(clientId, clientSecret);
    var clientCreds = await ApplicationTokenProvider
        .LoginSilentAsync(tenantId, creds);

    var adlsClient = new DataLakeAnalyticsJobManagementClient(clientCreds);
    return adlsClient;
}

The DataLakeAnalyticsJobManagementClient class comes from Microsoft.Azure.Management.DataLake.Analytics, Version=3.0.0.0 NuGet package that we have already installed into our project.

Next, we will write a method that gets us all the failed jobs:

private static async Task<Microsoft.Rest.Azure.IPage<JobInformation>>
    GetFailedJobsAsync(string accountName, DataLakeAnalyticsJobManagementClient client)
{
    // We are ignoring the data pages that have older jobs.
    // If that's important to you, follow the page's NextPageLink to retrieve the rest.
    return await client.Job
        .ListAsync(accountName,
            new ODataQuery<JobInformation>(job => job.Result == JobResult.Failed));
}

We now have the capability to retrieve failed jobs – great! Next, let's write the real logic that checks for failed jobs that have never been retried and resubmits them.

private const string RetryJobPrefix = "RETRY-";

public static async Task ProcessFailedJobsAsync(
    string tenantId, string clientId, string clientSecret, string accountName)
{
    var client = await GetAdlaClientAsync(clientId, clientSecret, tenantId);

    var failedJobs = await GetFailedJobsAsync(accountName, client);

    foreach (var failedJob in failedJobs)
    {
        // If it's a retry attempt we will not kick this off again.
        if (failedJob.Name.StartsWith(RetryJobPrefix)) continue;

        // We will retry this with a name prefixed with RETRY-
        var retryJobName = $"{RetryJobPrefix}{failedJob.Name}";

        // Before we kick this off again, check if we have already retried it before.
        if (!(await HasRetriedBeforeAsync(accountName, client, retryJobName)))
        {
            var jobDetails = await client.Job.GetAsync(accountName, failedJob.JobId.Value);
            var newJobID = Guid.NewGuid();

            var properties = new USqlJobProperties(jobDetails.Properties.Script);
            var parameters = new JobInformation(
                retryJobName,
                JobType.USql, properties,
                priority: failedJob.Priority,
                degreeOfParallelism: failedJob.DegreeOfParallelism,
                jobId: newJobID);

            // Resubmit this job now
            await client.Job.CreateAsync(accountName, newJobID, parameters);
        }
    }
}

private async static Task<bool> HasRetriedBeforeAsync(string accountName,
    DataLakeAnalyticsJobManagementClient client, string name)
{
    // Look for an existing job carrying the retry name
    var jobs = await client.Job
        .ListAsync(accountName,
            new ODataQuery<JobInformation>(job => job.Name == name));

    return jobs.Any();
}

That's all there is to it!

Final thoughts!

We can't avoid failures, but we can respond in ways that keep our system up, or at least minimize downtime. In this example, when one job fails unpredictably, its effects can cause the system to fail.

We should build our own mitigation against these uncertain factors – with automation.