Key Vault as a backing store for Azure Functions

If you have used Azure Functions, you are probably aware that it relies on a Storage Account underneath, both for file storage (the function app code lives in an Azure File share) and as a backing store for the Function Keys (the secrets used in function invocations).


Figure: Storage Account containers – “azure-webjobs-secrets”

If you look inside the container, you will find files with contents like the following:


Figure: These JSON files contain the function keys


Figure: Encrypted master keys and other function keys

I have been in conversations where storing these keys in a storage account was not appreciated. The security and governance team was looking for a better place to keep them, somewhere secrets can be further restricted from developer access.

Of course, we could put a VNET around the storage account and use a Private Link, but that has other consequences, since the content (the function implementation artifacts) is also stored in the same storage account. Configuring two separate storage accounts can address this better; however, it makes the setup more complicated than it has to be.
A better option is to store these keys in a Key Vault as the backing store, which is a great feature of Azure Functions, though few people seem to be aware of it due to the lack of documentation. In this article I will show you how to move these secrets to a Key Vault.

To do so, we need to configure a few Application Settings on the Function App. They are given below:

  • AzureWebJobsSecretStorageType: keyvault
  • AzureWebJobsSecretStorageKeyVaultName: <Key Vault name>
  • AzureWebJobsSecretStorageKeyVaultConnectionString: <connection string, or leave it empty when Managed Identity is configured on the Function App>
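If you prefer scripting, the same settings can be applied with the Azure CLI. A minimal sketch, assuming a hypothetical Function App named contoso-func in resource group contoso-rg and a Key Vault named contoso-kv:

az functionapp config appsettings set --name contoso-func --resource-group contoso-rg --settings AzureWebJobsSecretStorageType=keyvault AzureWebJobsSecretStorageKeyVaultName=contoso-kv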

Once you have configured the above settings, you need to enable Managed Identity on your Azure Function. You can do that in the Identity section under the Platform features tab. In my opinion this is a much better option, as we don't need to maintain any additional secrets to talk to Key Vault securely. Go ahead and turn the system-assigned identity toggle on. This will create a service principal with the same name as your Function App.
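If you would rather script this step as well, the same toggle can be switched on with the Azure CLI (same hypothetical names as above):

az functionapp identity assign --name contoso-func --resource-group contoso-rg

The command returns the principalId of the newly created identity, which we will need for the access policy in the next step.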


Figure: Enabling system assigned managed identity on Function app
The next step is to add an entry to the Key Vault's access policies for the service principal created in the earlier step.
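This can be done from the portal, or with the Azure CLI. A sketch, assuming the principalId returned in the previous step is stored in $principalId; the Functions runtime needs at least get, set, list and delete permissions on secrets (verify the exact set against the official documentation for your runtime version):

az keyvault set-policy --name contoso-kv --object-id $principalId --secret-permissions get set list delete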


Figure: Key vault Access policy
That's it. Hit your function app now and you will see the keys are stored inside the Key Vault. You can safely delete the container from the storage account.


Figure: Secrets are stored in Key Vault

I hope this saves you some time if you are concerned about keeping the keys in a storage account.
The Azure Functions runtime is open source and available on GitHub. You can have a look at the sources and find other interesting ideas to play with.

Access Control management via REST API – Azure Data Lake Gen 2

Background

A while ago, I built a web-based self-service portal that let multiple teams in the organisation set up Access Control Lists (ACLs) on their corresponding data lake folders.

The portal application targeted Azure Data Lake Gen 1. Recently I wanted to achieve the same on Azure Data Lake Gen 2. At the time of writing this post, there is no official NuGet package for ACL management targeting Data Lake Gen 2; one must rely on the REST API only.

Read about known issues and limitations of Azure Data Lake Storage Gen 2

Furthermore, the REST API documentation does not provide example snippets like many other Azure resources. Therefore, it takes time to demystify the REST APIs for manipulating ACLs. The good news is that I have done that for you, and I will share a straightforward C# class that wraps the details and issues the correct REST API calls to a Data Lake Store Gen 2.

About Azure Data Lake Store Gen 2

Azure Data Lake Storage Gen2 is a set of capabilities dedicated to big data analytics. Data Lake Storage Gen2 is significantly different from its earlier version, Azure Data Lake Storage Gen1; Gen2 is entirely built on Azure Blob storage.

Data Lake Storage Gen2 is the result of converging the capabilities of two existing Azure storage services: Azure Blob storage and Azure Data Lake Storage Gen1. Gen1 features such as file system semantics, directory- and file-level security and scale are combined with the low-cost, tiered storage and high availability/disaster recovery capabilities of Azure Blob storage.

Let’s get started!

Create a Service Principal

First we would need a service principal. We will use this principal to authenticate to Azure Active Directory (using OAuth 2.0 protocol) in order to authorize our REST calls. We will use Azure CLI to do that.

az ad sp create-for-rbac --name ServicePrincipalName
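The command prints a JSON payload similar to the following (values shortened). The appId is the Client ID, password is the Client Secret and tenant is the Tenant ID we will use later:

{
  "appId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "displayName": "ServicePrincipalName",
  "password": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "tenant": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}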
Add required permissions

Now you need to grant permission for your application to access Azure Storage.

  • Click on the application Settings
  • Click on Required permissions
  • Click on Add
  • Click Select API
  • Filter on Azure Storage
  • Click on Azure Storage
  • Click Select
  • Click the checkbox next to Access Azure Storage
  • Click Select
  • Click Done


Now we have the Client ID, Client Secret and Tenant ID (you can take the Tenant ID from the Properties tab of Azure Active Directory, where it is listed as Directory ID).

Access Token from Azure Active Directory

Let’s write some C# code to get an Access Token from Azure Active Directory:

using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

public class OAuthTokenProvider
{
    private readonly string tenantId;
    private readonly string clientId;
    private readonly string secret;
    private readonly string scopeUri;

    private const string IdentityEndpoint = "https://login.microsoftonline.com";
    private const string DEFAULT_SCOPE = "https://management.azure.com/";
    private const string MEDIATYPE = "application/x-www-form-urlencoded";

    public OAuthTokenProvider(string tenantId, string clientId, string secret, string scopeUri = DEFAULT_SCOPE)
    {
        this.tenantId = tenantId;
        this.clientId = WebUtility.UrlEncode(clientId);
        this.secret = WebUtility.UrlEncode(secret);
        this.scopeUri = WebUtility.UrlEncode(scopeUri);
    }

    public async Task<Token> GetAccessTokenV2EndpointAsync()
    {
        var url = $"{IdentityEndpoint}/{this.tenantId}/oauth2/v2.0/token";
        // Statics.Http is a shared HttpClient instance used across these snippets.
        var Http = Statics.Http;
        Http.DefaultRequestHeaders.Accept.Clear();
        Http.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue(MEDIATYPE));

        // Client credentials grant: exchange the client id/secret for an access token.
        var body = $"grant_type=client_credentials&client_id={clientId}&client_secret={secret}&scope={scopeUri}";
        var response = await Http.PostAsync(url, new StringContent(body, Encoding.UTF8, MEDIATYPE));
        if (response.IsSuccessStatusCode)
        {
            var tokenResponse = await response.Content.ReadAsStringAsync();
            return JsonConvert.DeserializeObject<Token>(tokenResponse);
        }
        return default(Token);
    }

    public class Token
    {
        public string access_token { get; set; }
        public string token_type { get; set; }
        public int expires_in { get; set; }
        public int ext_expires_in { get; set; }
    }
}


Creating ADLS Gen 2 REST client

Once we have the token provider, we can jump in implementing the REST client for Azure Data Lake.

public class FileSystemApi
{
    private readonly string storageAccountName;
    private readonly OAuthTokenProvider tokenProvider;
    private readonly Uri baseUri;

    private const string ACK_HEADER_NAME = "x-ms-acl";
    private const string API_VERSION_HEADER_NAME = "x-ms-version";
    private const string API_VERSION_HEADER_VALUE = "2018-11-09";
    private int Timeout = 100;

    public FileSystemApi(string storageAccountName, OAuthTokenProvider tokenProvider)
    {
        this.storageAccountName = storageAccountName;
        this.tokenProvider = tokenProvider;
        this.baseUri = new Uri($"https://{this.storageAccountName}.dfs.core.windows.net");
    }


Data Lake ACLs and POSIX permissions

The security model for Data Lake Gen2 supports ACLs and POSIX permissions, along with some extra granularity specific to Data Lake Storage Gen2. Settings may be configured through Storage Explorer or through frameworks like Hive and Spark. In this post we will do it via the REST API.

There are two kinds of access control lists (ACLs), Access ACLs and Default ACLs.

  • Access ACLs: These control access to an object. Files and folders both have Access ACLs.
  • Default ACLs: A “template” of ACLs associated with a folder that determine the Access ACLs for any child items that are created under that folder. Files do not have Default ACLs.

Here’s the table of allowed grant types:

Figure: Allowed grant types (read, write, execute)

When defining ACLs we need to use a short form of these grant types. The Microsoft documentation explains the short forms in the table below:

Figure: POSIX short forms of the grant types
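To illustrate the short form: an ACL entry is written as [default:]type:id:permissions, where the permissions use the familiar rwx notation. For example, the following two (hypothetical) entries grant a group read/execute access, and set a default read/write/execute entry that child items will inherit:

group:2dec2374-3c51-4743-b247-ad6f80ce4f0b:r-x
default:group:62049695-0418-428e-a5e4-64600d6d68d8:rwx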

However, in our code we will also simplify the POSIX ACL notation with some supporting classes, shown below. That way, consumers of the REST client do not need to spend time building the short form of their intended grant criteria.

public enum AclType
{
    User,
    Group,
    Other,
    Mask
}

public enum AclScope
{
    Access,
    Default
}

[Flags]
public enum GrantType : short
{
    None = 0,
    Read = 1,
    Write = 2,
    Execute = 4
};

public class AclEntry
{
    public AclEntry(AclScope scope, AclType type, string upnOrObjectId, GrantType grant)
    {
        Scope = scope;
        AclType = type;
        UpnOrObjectId = upnOrObjectId;
        Grant = grant;
    }

    public AclScope Scope { get; private set; }
    public AclType AclType { get; private set; }
    public string UpnOrObjectId { get; private set; }
    public GrantType Grant { get; private set; }

    // Renders the grant as the POSIX "rwx" short form, e.g. "r-x".
    public string GetGrantPosixFormat()
    {
        return $"{(this.Grant.HasFlag(GrantType.Read) ? 'r' : '-')}{(this.Grant.HasFlag(GrantType.Write) ? 'w' : '-')}{(this.Grant.HasFlag(GrantType.Execute) ? 'x' : '-')}";
    }

    // Renders the full ACL entry, e.g. "default:group:<objectId>:rwx".
    public override string ToString()
    {
        return $"{(this.Scope == AclScope.Default ? "default:" : string.Empty)}{this.AclType.ToString().ToLowerInvariant()}:{this.UpnOrObjectId}:{GetGrantPosixFormat()}";
    }
}


Now we can create methods to perform different REST calls, let’s start by creating a file system.

public async Task<bool> CreateFileSystemAsync(string fileSystemName)
{
    var tokenInfo = await tokenProvider.GetAccessTokenV2EndpointAsync();
    var jsonContent = new StringContent(string.Empty);
    var headers = Statics.Http.DefaultRequestHeaders;
    headers.Clear();
    headers.Add("Authorization", $"Bearer {tokenInfo.access_token}");
    headers.Add(API_VERSION_HEADER_NAME, API_VERSION_HEADER_VALUE);
    var response = await Statics.Http.PutAsync($"{baseUri}{WebUtility.UrlEncode(fileSystemName)}?resource=filesystem", jsonContent);
    return response.IsSuccessStatusCode;
}

Here we retrieve an access token and then issue a REST call to the Azure Data Lake Storage Gen 2 API to create a new file system. Next, we will create a folder and a file in it, and then set some access control on them.

Let’s create the folder:

public async Task<bool> CreateDirectoryAsync(string fileSystemName, string fullPath)
{
    var tokenInfo = await tokenProvider.GetAccessTokenV2EndpointAsync();
    var jsonContent = new StringContent(string.Empty);
    var headers = Statics.Http.DefaultRequestHeaders;
    headers.Clear();
    headers.Add("Authorization", $"Bearer {tokenInfo.access_token}");
    headers.Add(API_VERSION_HEADER_NAME, API_VERSION_HEADER_VALUE);
    var response = await Statics.Http.PutAsync($"{baseUri}{WebUtility.UrlEncode(fileSystemName)}{fullPath}?resource=directory", jsonContent);
    return response.IsSuccessStatusCode;
}


And then we create a file in it. Now, file creation (ingestion into the Data Lake) is not that straightforward; at least, it cannot be done in a single call. We first have to create an empty file, then we can write some content into it (we can also append content to an existing file), and finally we have to flush the buffer so the new content gets persisted.

Let's do that. First, we will see how to create an empty file:

public async Task<bool> CreateEmptyFileAsync(string fileSystemName, string path, string fileName)
{
    var tokenInfo = await tokenProvider.GetAccessTokenV2EndpointAsync();
    var jsonContent = new StringContent(string.Empty);
    var headers = Statics.Http.DefaultRequestHeaders;
    headers.Clear();
    headers.Add("Authorization", $"Bearer {tokenInfo.access_token}");
    headers.Add(API_VERSION_HEADER_NAME, API_VERSION_HEADER_VALUE);
    var response = await Statics.Http.PutAsync($"{baseUri}{WebUtility.UrlEncode(fileSystemName)}{path}{fileName}?resource=file", jsonContent);
    return response.IsSuccessStatusCode;
}


The above snippet creates an empty file. Now we will read the content of a local file (from the PC) and write it into the empty file we just created in Azure Data Lake.

public async Task<bool> CreateFileAsync(string filesystem, string path,
    string fileName, Stream stream)
{
    var operationResult = await this.CreateEmptyFileAsync(filesystem, path, fileName);
    if (operationResult)
    {
        var tokenInfo = await tokenProvider.GetAccessTokenV2EndpointAsync();
        var headers = Statics.Http.DefaultRequestHeaders;
        headers.Clear();
        headers.Add("Authorization", $"Bearer {tokenInfo.access_token}");
        headers.Add(API_VERSION_HEADER_NAME, API_VERSION_HEADER_VALUE);
        using (var streamContent = new StreamContent(stream))
        {
            var resourceUrl = $"{baseUri}{filesystem}{path}{fileName}?action=append&timeout={this.Timeout}&position=0";
            var msg = new HttpRequestMessage(new HttpMethod("PATCH"), resourceUrl);
            msg.Content = streamContent;
            var response = await Statics.Http.SendAsync(msg);
            // flush the buffer to commit the file
            var flushUrl = $"{baseUri}{filesystem}{path}{fileName}?action=flush&timeout={this.Timeout}&position={msg.Content.Headers.ContentLength}";
            var flushMsg = new HttpRequestMessage(new HttpMethod("PATCH"), flushUrl);
            response = await Statics.Http.SendAsync(flushMsg);
            return response.IsSuccessStatusCode;
        }
    }
    return false;
}


Right! Now it's time to set access control on a directory, or on files inside a directory. Here's the method we will use to do that.

public async Task<bool> SetAccessControlAsync(string fileSystemName, string path, AclEntry[] acls)
{
    var targetPath = $"{WebUtility.UrlEncode(fileSystemName)}{path}";
    var tokenInfo = await tokenProvider.GetAccessTokenV2EndpointAsync();
    var jsonContent = new StringContent(string.Empty);
    var headers = Statics.Http.DefaultRequestHeaders;
    headers.Clear();
    headers.Add("Authorization", $"Bearer {tokenInfo.access_token}");
    headers.Add(API_VERSION_HEADER_NAME, API_VERSION_HEADER_VALUE);
    headers.Add(ACK_HEADER_NAME, string.Join(',', acls.Select(a => a.ToString()).ToArray()));
    var response = await Statics.Http.PatchAsync($"{baseUri}{targetPath}?action=setAccessControl", jsonContent);
    return response.IsSuccessStatusCode;
}


The entire file system REST API class can be found here. Here's an example of how we can use these methods from a console application.

var tokenProvider = new OAuthTokenProvider(tenantId, clientId, secret, scope);
var hdfs = new FileSystemApi(storageAccountName, tokenProvider);

var response = hdfs.CreateFileSystemAsync(fileSystemName).Result;
hdfs.CreateDirectoryAsync(fileSystemName, "/demo").Wait();
hdfs.CreateEmptyFileAsync(fileSystemName, "/demo/", "example.txt").Wait();

var stream = new FileStream(@"C:\temp.txt", FileMode.Open, FileAccess.Read);
hdfs.CreateFileAsync(fileSystemName, "/demo/", "mytest.txt", stream).Wait();

var acls = new AclEntry[]
{
    new AclEntry(
        AclScope.Access,
        AclType.Group,
        "2dec2374-3c51-4743-b247-ad6f80ce4f0b",
        (GrantType.Read | GrantType.Execute)),
    new AclEntry(
        AclScope.Access,
        AclType.Group,
        "62049695-0418-428e-a5e4-64600d6d68d8",
        (GrantType.Read | GrantType.Write | GrantType.Execute)),
    new AclEntry(
        AclScope.Default,
        AclType.Group,
        "62049695-0418-428e-a5e4-64600d6d68d8",
        (GrantType.Read | GrantType.Write | GrantType.Execute))
};
hdfs.SetAccessControlAsync(fileSystemName, "/", acls).Wait();


Conclusion

Until an official client package is released, if you're working with Azure Data Lake Store Gen 2 and wondering how to make these REST calls, I hope this post has helped you move forward!

Thanks for reading.

 

Zero-Secret application development with Azure Managed Service Identity

Committing secrets along with application code to a repository is one of the most common mistakes developers make. This can get nasty when an application is developed for cloud deployment. You have probably read the story of AWS S3 secrets checked in to GitHub: the developer corrected the mistake within 5 minutes, but still received a hefty invoice because of bots that crawl open source sites looking for secrets. There are many tools that can scan code for potential secret leakage, and they can be embedded in a CI/CD pipeline. These tools do a great job of finding deliberate or unintentional commits that contain secrets before they get merged to a release/master branch. However, they do not protect against every potential leak. Developers still need to carefully review their code on every commit.

Azure Managed Service Identity (MSI) can address this problem in a very neat way. MSI makes it possible to design applications that are secret-less: there is no need to keep any secrets (especially secrets such as database connection strings or storage keys) in the application code at all.

Secret management in application

Let's recall how we used to do secret management. For simplicity's sake, say we have a web application that is backed by a SQL Server. This means we almost certainly have a configuration key (the SQL connection string) in our configuration file. If we have storage accounts, we might have a Shared Access Signature (aka SAS token) in our config file as well.

As we can see, we keep adding secrets to our configuration file, one after another, in plain text. We now need credential scanner tasks in our pipelines and local configuration files for local development, and we need to mitigate the risk of checking secrets in to the repository.

Azure Key Vault as secret store

Azure Key Vault can simplify all of the above a lot and make things much cleaner. We can store the secrets in a Key Vault and, in the CI/CD pipeline, fetch them from the vault and write them into the configuration files just before we publish the application code to the cloud infrastructure. VSTS build and release pipelines have a concept of a Library that can be linked with Key Vault secrets, designed to do exactly that. In this case the configuration file should contain string placeholders that are replaced with secrets during CD execution.

The above works great, but you still have a configuration file with all the placeholders for secrets (when you have multiple services that have secrets), which makes it difficult to manage for both local and cloud development. An improvement is to keep all the secrets in Key Vault and let the application load them at runtime (during the startup event) directly from the Key Vault. This is much easier to manage and also a pretty clean solution: the local environment can use a different Key Vault than production, the configuration logic becomes much simpler, and the configuration file now contains only one secret. That is a Service Principal secret, which is used to talk to the Key Vault during startup.
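For illustration, here is a minimal sketch of that startup logic, assuming the classic Microsoft.Azure.KeyVault package together with ADAL (Microsoft.IdentityModel.Clients.ActiveDirectory); the vault URL and secret name are hypothetical:

using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

public static class StartupSecrets
{
    // Loads a secret from Key Vault using the service principal's client id/secret.
    public static async Task<string> LoadSqlConnectionStringAsync(string clientId, string clientSecret)
    {
        var kv = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(
            async (authority, resource, scope) =>
            {
                // Authenticate to Azure AD with the service principal and return a token for Key Vault.
                var context = new AuthenticationContext(authority);
                var credential = new ClientCredential(clientId, clientSecret);
                var result = await context.AcquireTokenAsync(resource, credential);
                return result.AccessToken;
            }));

        // Hypothetical vault and secret names.
        var secret = await kv.GetSecretAsync("https://contoso-vault.vault.azure.net/secrets/SqlConnectionString");
        return secret.Value;
    }
}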

So we get all the secrets stored in a vault and exactly one secret in our configuration file – nice! But if we accidentally commit this one secret, all the other secrets in the vault are also compromised. What can we do to make this more secure? Let's recap our knowledge of service principals before we draw up the solution.

What is Service Principal?

A resource that is secured by an Azure AD tenant can only be accessed by a security principal. A user is granted access to an AD resource through their security principal, known as a User Principal. When a service (a piece of software) wants to access a secure resource, it needs to use the security principal of an Azure AD Application object. We call these Service Principals. You can think of a Service Principal as an instance of an Azure AD Application. A service principal has a secret, often referred to as the Client Secret. This is analogous to the password of a user principal. The Service Principal ID (often known as the Application ID or Client ID) and the Client Secret together authenticate an application to Azure AD for secure resource access. In our earlier example, we needed to keep this client secret (the only secret) in our configuration file to gain access to the Key Vault. Client secrets have an expiration period, and it is up to the application developers to renew them to keep things secure. In a large solution this can easily turn into a difficult job: keeping all the service principal secrets renewed with short expiration times.

Managed Service Identity

Managed Service Identity is explained in detail in the Microsoft documentation. In layman's terms, an MSI literally is a Service Principal, created directly by Azure, whose client secret is stored and rotated by Azure as well; that is why it is "managed". If we create an Azure web app and turn on Managed Service Identity on it (which is just a toggle switch), Azure will provision an Application object in AD (the Azure Active Directory for the tenant), create a Service Principal for it and store the client secret somewhere we never need to care about. This MSI now represents the web application's identity in Azure AD.

Managed Service Identity can be provisioned in the Azure Portal, Azure PowerShell or the Azure CLI as below:

az login
az group create --name myResourceGroup --location westus
az appservice plan create --name myPlan --resource-group myResourceGroup --sku S1
az webapp create --name myApp --plan myPlan --resource-group myResourceGroup
az webapp identity assign --name myApp --resource-group myResourceGroup

Or via Azure Resource Manager Template:

{
    "apiVersion": "2016-08-01",
    "type": "Microsoft.Web/sites",
    "name": "[variables('appName')]",
    "location": "[resourceGroup().location]",
    "identity": {
        "type": "SystemAssigned"
    },
    "properties": {
        "name": "[variables('appName')]",
        "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
        "hostingEnvironment": "",
        "clientAffinityEnabled": false,
        "alwaysOn": true
    },
    "dependsOn": [
        "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]"
    ]
}

Going back to our Key Vault example, with MSI we can now eliminate the Service Principal's client secret from our application code.

But wait! We used to read keys/secrets from Key Vault during application startup, and we needed that client secret to do so. How are we going to talk to Key Vault now, without the secret?

Using MSI from App service

Azure provides a couple of environment variables for App Services that have MSI enabled:

  • MSI_ENDPOINT
  • MSI_SECRET

The first one is a URL our application can make a request to, with the MSI_SECRET passed as a header, and the response will be an access token that lets us talk to the Key Vault. This sounds a bit involved, but fortunately we don't need to do it by hand.
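Under the hood this is a plain HTTP exchange. Roughly (the header name and api-version below reflect the App Service MSI endpoint at the time of writing; treat them as values to verify against the documentation):

GET {MSI_ENDPOINT}?resource=https://vault.azure.net&api-version=2017-09-01
Secret: {MSI_SECRET}

The response body is a JSON payload containing an access_token field.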

The Microsoft.Azure.Services.AppAuthentication library for .NET wraps these complexities for us and provides an easy API to get the access token.

We need to add references to the Microsoft.Azure.Services.AppAuthentication and Microsoft.Azure.KeyVault NuGet packages to our application.
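For example, from the project directory:

dotnet add package Microsoft.Azure.Services.AppAuthentication
dotnet add package Microsoft.Azure.KeyVault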

Now we can get the access token to communicate to the key vault in our startup like following:


using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.Azure.KeyVault;

// ...

var azureServiceTokenProvider = new AzureServiceTokenProvider();

string accessToken = await azureServiceTokenProvider.GetAccessTokenAsync("https://management.azure.com/");

// OR

var kv = new KeyVaultClient(
    new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));

This is neat, agree? We now have our application configuration file that has no secrets or keys whatsoever. Isn’t it cool?

Step up – activating zero-secret mode

We have managed to deploy our web application with zero secrets above. However, we still have secrets for the SQL database, storage accounts etc. in our Key Vault; we just don't have to put them in our configuration files. They are still there and are still loaded during the startup event of our web application. This is a great improvement, of course, but MSI allows us to take it a step further.

Azure AD Authentication for Azure Services

To leverage MSI's full potential we should use Azure AD authentication (RBAC controls). For instance, we have been using Shared Access Signatures or SQL connection strings to communicate with Azure Storage/Service Bus and SQL servers. With AD authentication, we instead use a security principal that has a role assignment via Azure RBAC.

Azure is gradually enabling AD authentication for resources. As of today (the time of writing this blog) the following services/resources support AD authentication with Managed Service Identity.

Service                  Resource ID                      Status     Date
Azure Resource Manager   https://management.azure.com/    Available  September 2017
Azure Key Vault          https://vault.azure.net          Available  September 2017
Azure Data Lake          https://datalake.azure.net/      Available  September 2017
Azure SQL                https://database.windows.net/    Available  October 2017
Azure Event Hubs         https://eventhubs.azure.net      Available  December 2017
Azure Service Bus        https://servicebus.azure.net     Available  December 2017
Azure Storage            https://storage.azure.com/       Preview    May 2018

Access can be assigned via the Azure portal, PowerShell or the Azure CLI.

Read more updated info here.

AD authentication finally allows us to completely remove those secrets from the Key Vault and access storage accounts, Data Lake stores and SQL servers directly with MSI tokens. Let's look at some examples to understand this.

Example: Accessing Storage Queues with MSI

In our earlier example, we talked about the Azure web app for which we have enabled Managed Service Identity. In this example we will see how we can put a message into an Azure Storage Queue using MSI. Assuming our web application name is:

contoso-msi-web-app

Once we have enabled the managed service identity for this web app, Azure provisioned an identity (an AD Application object and a Service Principal for it) with the same name as the web application, i.e. contoso-msi-web-app.

Now we need to create a role assignment for this Service Principal so that it can access the storage account. We can do that in the Azure Portal: go to the storage account's IAM blade (the access control page) and add a role for this principal. Of course, you can also do it with PowerShell.

If you are not doing it in the Portal, you need to know the ID of the MSI. Here's how you get it (in the Azure CLI console):


az resource show -n $webApp -g $resourceGroup --resource-type Microsoft.Web/sites --query identity

You should see output like the following:

{
    "principalId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "tenantId": "xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx",
    "type": null
}

The Principal ID is what you are after. We can now assign roles for this principal as follows:

$exitingRoleDef = Get-AzureRmRoleAssignment `
                -ObjectId `
                -RoleDefinitionName "Contributor"  `
                -ResourceGroupName "RGP NAME"
            If ($exitingRoleDef -eq $null) {
                New-AzureRmRoleAssignment `
                    -ObjectId  `
                    -RoleDefinitionName "Contributor" `
                    -ResourceGroupName "RGP NAME"
            }

You can run these commands in the CD pipeline with inline Azure PowerShell tasks in VSTS release pipelines.
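For reference, an equivalent one-liner with the Azure CLI (using the principal ID captured above and the same resource group scope as the PowerShell snippet) could look roughly like this:

az role assignment create --assignee xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --role "Contributor" --resource-group "RGP NAME"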

Let's write an MSI token helper class.

internal class TokenHelper
{
    internal async Task<string> GetManagementApiAccessTokenAsync()
    {
        var astp = new AzureServiceTokenProvider();
        var accessToken = await astp
            .GetAccessTokenAsync(Constants.AzureManagementAPI);
        return accessToken;
    }
}

We will use the Token Helper in a Storage Account helper class.

internal class StorageAccountHelper
{
    internal async Task<StorageKeys> GetStorageKeysAsync()
    {
        var token = await new TokenHelper().GetManagementApiAccessTokenAsync();
        return await GetStorageKeysAsync(token);
    }

    internal async Task<StorageKeys> GetStorageKeysAsync(string token)
    {
        var uri = new Uri($"{Constants.AzureManagementAPI}subscriptions/{Constants.SubscriptionID}/resourceGroups/{Constants.ResourceGroupName}/providers/Microsoft.Storage/storageAccounts/{Constants.StorageAccountName}/listKeys?api-version=2016-01-01");
        var content = new StringContent(string.Empty, Encoding.UTF8, "text/html");
        using (var httpClient = new HttpClient())
        {
            httpClient.DefaultRequestHeaders.Authorization
                = new AuthenticationHeaderValue("Bearer", token);
            using (var response = await httpClient.PostAsync(uri, content))
            {
                var responseText = await response.Content.ReadAsStringAsync();
                var keys = JsonConvert.DeserializeObject<StorageKeys>(responseText);
                return keys;
            }
        }
    }
}

Now, let’s write a message into the Storage Queue.

public class MessageClient
{
    public MessageClient()
    {
    }

    public virtual async Task SendAsync(string message)
    {
        var cq = await GetQueueClient();
        await cq.AddMessageAsync(new CloudQueueMessage(message));
    }

    private static async Task<CloudQueue> GetQueueClient()
    {
        var keys = await new StorageAccountHelper().GetStorageKeysAsync();
        var storageCredentials =
            new StorageCredentials(Constants.StorageAccountName, keys.Keys.First().Value);
        var csa = new CloudStorageAccount(storageCredentials, true);
        var cqc = csa.CreateCloudQueueClient();
        var cq = cqc.GetQueueReference(Constants.QueueName);
        await cq.CreateIfNotExistsAsync();
        return cq;
    }
}

Isn’t it awesome?

Another example, this time SQL server

As of now, Azure SQL Database does not support creating logins or users from service principals created via Managed Service Identity. Fortunately, there is a workaround: we can add the MSI principal to an AAD group as a member, and then grant that group access to the database.

We can use the Azure CLI to create the group and add our MSI to it:

az ad group create --display-name sqlusers --mail-nickname 'NotNeeded'
az ad group member add -g sqlusers --member-id xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx

Again, we are using the MSI's principal ID as the member id parameter here.
Next, we need to allow this group to access the SQL database. PowerShell to the rescue again:

$query = @"CREATE USER [$adGroupName] FROM EXTERNAL PROVIDER
GO
ALTER ROLE db_owner ADD MEMBER [$adGroupName]
"@
sqlcmd.exe -S "tcp:$sqlServer,1433" `
-N -C -d $database -G -U $sqlAdmin.UserName `
-P $sqlAdmin.GetNetworkCredential().Password `
-Q $query

Let’s write a token helper class for SQL as we did before for storage queue.

public static class TokenHelper
{
    public static Task<string> GetTokenAsync()
    {
        var provider = new AzureServiceTokenProvider();
        return provider.GetAccessTokenAsync("https://database.windows.net/");
    }
}

public static class SqlConnectionExtensions
{
    public async static Task<TPayload> ExecuteScalar<TPayload>(string commandText)
        where TPayload : class
    {
        var connectionString = "connection string without credentials";
        var token = await TokenHelper.GetTokenAsync();
        using (var conn = new SqlConnection(connectionString))
        {
            // The MSI access token replaces the user name/password in the connection string.
            conn.AccessToken = token;
            await conn.OpenAsync();
            using (var cmd = new SqlCommand(commandText, conn))
            {
                var result = await cmd.ExecuteScalarAsync();
                return result as TPayload;
            }
        }
    }
}

We are almost done, now we can run SQL commands from web app like this:

public class WebApp
{
    public async static void StartUp()
    {
        var userName = await SqlConnectionExtensions
            .ExecuteScalar<string>("SELECT SUSER_SNAME()");
    }
}

Voila!

Conclusion

Managed Service Identity is awesome and powerful; it really helps build applications whose security is easy to manage over the long run. Especially when you have lots of applications, you end up with a huge number of service principals, and managing their secrets over time and keeping track of their expiration is a nightmare. Managed Service Identity makes it so much nicer!

 

Thanks for reading!