Deploying an Azure web job written in .NET Core

Recently I wrote a .NET Core web job and wanted to publish it via CD (continuous deployment) from Visual Studio Online. I soon figured out that the Azure WebJobs SDK doesn't support .NET Core yet. The work I expected to take 10 minutes took about an hour.

If you are trying to figure this out as well, this blog post is what you are looking for.

I will describe the steps and provide a PowerShell script that does the deployment via the Kudu API. Kudu is the source control management engine behind Azure App Services, and it exposes a zip API that allows us to deploy a zipped folder into an Azure App Service.

Here are the steps you need to follow. Start by creating a simple .NET Core console application. Add a PowerShell file to the project that will do the deployment in your Visual Studio Online release pipeline. The PowerShell script will do the following:

  • Publish the project (using dotnet publish)
  • Make a zip out of the artifacts
  • Deploy the zip into the Azure web app

Publishing the project

We will use the dotnet publish command to publish our project.

$resourceGroupName = "my-resource-group"
$webAppName = "my-web-job"
$projectName = "WebJob"
$outputRoot = "webjobpublish"
$outputFolderPath = "webjobpublish\App_Data\Jobs\Continuous\my-web-job"
$zipName = "publishwebjob.zip"

$projectFolder = Join-Path `
    -Path "$((get-item $PSScriptRoot ).FullName)" `
    -ChildPath $projectName
$outputFolder = Join-Path `
    -Path "$((get-item $PSScriptRoot ).FullName)" `
    -ChildPath $outputFolderPath
$outputFolderTopDir = Join-Path `
    -Path "$((get-item $PSScriptRoot ).FullName)" `
    -ChildPath $outputRoot
$zipPath = Join-Path `
    -Path "$((get-item $PSScriptRoot ).FullName)" `
    -ChildPath $zipName

if (Test-Path $outputFolder) 
  { Remove-Item $outputFolder -Recurse -Force; }
if (Test-Path $zipPath) { Remove-Item $zipPath -Force }
$fullProjectPath = "$projectFolder\$projectName.csproj"

dotnet publish "$fullProjectPath" `
     --configuration release --output $outputFolder

Create a compressed artifact folder

We will use the System.IO.Compression.FileSystem assembly to create the zip file.

Add-Type -assembly "System.IO.Compression.Filesystem"
[IO.Compression.Zipfile]::CreateFromDirectory(
        $outputFolderTopDir, $zipPath)

Upload the zip into Azure web app

The next step is to upload the zip file to the Azure web app. We first need to fetch the publishing credentials for the Azure web app and then use the Kudu API to upload the content. Here's the script:

function Get-PublishingProfileCredentials
         ($resourceGroupName, $webAppName) {

    $resourceType = "Microsoft.Web/sites/config"
    $resourceName = "$webAppName/publishingcredentials"

    $publishingCredentials = Invoke-AzureRmResourceAction `
                 -ResourceGroupName $resourceGroupName `
                 -ResourceType $resourceType `
                 -ResourceName $resourceName `
                 -Action list `
                 -ApiVersion 2015-08-01 `
                 -Force 
    return $publishingCredentials
}

function Get-KuduApiAuthorisationHeaderValue
         ($resourceGroupName, $webAppName) {

    $publishingCredentials = 
      Get-PublishingProfileCredentials $resourceGroupName $webAppName

    return ("Basic {0}" -f `
        [Convert]::ToBase64String( `
        [Text.Encoding]::ASCII.GetBytes(("{0}:{1}" 
           -f $publishingCredentials.Properties.PublishingUserName, `
        $publishingCredentials.Properties.PublishingPassword))))
}

$kuduHeader = Get-KuduApiAuthorisationHeaderValue `
    -resourceGroupName $resourceGroupName `
    -webAppName $webAppName

$Headers = @{
    Authorization = $kuduHeader
}

# use kudu deploy from zip file
Invoke-WebRequest `
    -Uri "https://$webAppName.scm.azurewebsites.net/api/zipdeploy" `
    -Headers $Headers `
    -InFile $zipPath `
    -ContentType "multipart/form-data" `
    -Method Post

# Clean up the artifacts now
if (Test-Path $outputFolder) 
      { Remove-Item $outputFolder -Recurse -Force; }
if (Test-Path $zipPath) { Remove-Item $zipPath -Force }

PowerShell task in Visual Studio Online

Now we can leverage the Azure PowerShell task in the Visual Studio Online release pipeline and invoke the script to deploy the web job.

That’s it!

Thanks for reading, and have a nice day!

Zero-Secret application development with Azure Managed Service Identity

Committing secrets along with application code to a repository is one of the most common mistakes developers make. It can get nasty when an application is developed for cloud deployment. You have probably read the story of the developer who checked AWS S3 secrets into GitHub; the mistake was corrected within 5 minutes, yet he still received a hefty invoice because of bots that crawl open source sites looking for secrets. There are many tools that can scan code for potential secret leakages, and they can be embedded in a CI/CD pipeline. These tools do a great job of finding deliberate or unintentional commits that contain secrets before they get merged to a release/master branch. However, they do not protect against all potential secret leaks, and developers still need to carefully review their code on every commit.

Azure Managed Service Identity (MSI) can address this problem in a very neat way. MSI makes it possible to design applications that are secret-less: there is no need to keep any secrets (database connection strings, storage keys and so on) in the application code at all.

Secret management in application

Let's recall how we were doing secret management until now. For simplicity's sake, say we have a web application that is backed by a SQL server. This means we almost certainly have a configuration key (the SQL connection string) in our configuration file. If we have storage accounts, we might also have a Shared Access Signature (aka SAS token) in the config file.

As we see, we keep adding secrets to our configuration file, one after another, in plain text. We now need credential scanner tasks in our pipelines and separate local configuration files (for local development), and we still need to mitigate the risk of checking secrets into the repository.

Azure Key Vault as secret store

Azure Key Vault can simplify all of the above a lot and make things much cleaner. We can store the secrets in a Key Vault, and in the CI/CD pipeline we can fetch them from the vault and write them into the configuration files just before we publish the application code to the cloud infrastructure. VSTS build and release pipelines have a concept of a Library that can be linked with Key Vault secrets, designed to do exactly that. The configuration file in this case should contain string placeholders that are replaced with secrets during CD execution.

The above works great, but you still have a configuration file full of placeholders for secrets (when you have multiple services that need secrets), which makes it difficult to manage for both local and cloud development. An improvement is to keep all the secrets in Key Vault and let the application load them at runtime (during the startup event) directly from the vault. This is much easier to manage and a pretty clean solution. The local environment can use a different Key Vault than production, the configuration logic becomes much simpler, and the configuration file now holds only one secret: a Service Principal secret, which is used to talk to the Key Vault during startup.

So we get all the secrets stored in a vault and exactly one secret in our configuration file. Nice! But if we accidentally commit this single secret, all the other secrets in the vault are compromised as well. What can we do to make this more secure? Let's recap our knowledge about service principals before we draw the solution.

What is a Service Principal?

A resource that is secured by an Azure AD tenant can only be accessed by a security principal. A user is granted access to an AD resource through their security principal, known as a User Principal. When a service (a piece of software) wants to access a secure resource, it needs to use the security principal of an Azure AD Application object; we call these Service Principals. You can think of a Service Principal as an instance of an Azure AD Application. A service principal has a secret, often referred to as the Client Secret, which is analogous to a user principal's password. The Service Principal ID (often known as Application ID or Client ID) and the Client Secret together can authenticate an application to Azure AD for secure resource access. In our earlier example, we needed to keep this client secret (the only secret) in our configuration file to gain access to the Key Vault. Client secrets have an expiration period, and it is up to the application developers to renew them to keep things secure. In a large solution this can easily turn into a difficult job: keeping all the service principal secrets renewed with short expiration times.

Managed Service Identity

Managed Service Identity is explained in the Microsoft documentation in detail. In layman's terms, an MSI literally is a Service Principal, created directly by Azure, whose client secret is also stored and rotated by Azure. That is why it is "managed". If we create an Azure web app and turn on Managed Service Identity on it (which is just a toggle switch), Azure will provision an Application object in AD, create a Service Principal for it and store the secret somewhere we never need to care about. This MSI now represents the web application's identity in Azure AD.

Managed Service Identity can be provisioned in the Azure Portal, Azure PowerShell or the Azure CLI as below:

az login
az group create --name myResourceGroup --location westus
az appservice plan create --name myPlan --resource-group myResourceGroup --sku S1
az webapp create --name myApp --plan myPlan --resource-group myResourceGroup
az webapp identity assign --name myApp --resource-group myResourceGroup

Or via Azure Resource Manager Template:

{
  "apiVersion": "2016-08-01",
  "type": "Microsoft.Web/sites",
  "name": "[variables('appName')]",
  "location": "[resourceGroup().location]",
  "identity": {
    "type": "SystemAssigned"
  },
  "properties": {
    "name": "[variables('appName')]",
    "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
    "hostingEnvironment": "",
    "clientAffinityEnabled": false,
    "alwaysOn": true
  },
  "dependsOn": [
    "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]"
  ]
}

Going back to our Key Vault example: with MSI we can now eliminate the Service Principal's client secret from our application code.

But wait! We used to read keys/secrets from the Key Vault during application startup, and we needed that client secret to do so. How are we going to talk to the Key Vault now, without the secret?

Using MSI from App service

Azure provides a couple of environment variables for App Services that have MSI enabled:

  • MSI_ENDPOINT
  • MSI_SECRET

The first one is a URL that our application can make a request to, passing the MSI_SECRET in a request header, and the response is an access token that lets us talk to the Key Vault. This sounds a bit complex, but fortunately we don't need to do it by hand.

The Microsoft.Azure.Services.AppAuthentication library for .NET wraps these complexities for us and provides an easy API to get the access token.

We need to add references to the Microsoft.Azure.Services.AppAuthentication and Microsoft.Azure.KeyVault NuGet packages to our application.

Now we can get the access token to communicate with the Key Vault in our startup code like the following:


using Microsoft.Azure.Services.AppAuthentication;
using Microsoft.Azure.KeyVault;

// ...

var azureServiceTokenProvider = new AzureServiceTokenProvider();

string accessToken = await azureServiceTokenProvider.GetAccessTokenAsync("https://management.azure.com/");

// OR

var kv = new KeyVaultClient(
    new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));

This is neat, agree? We now have our application configuration file that has no secrets or keys whatsoever. Isn’t it cool?
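
For completeness, reading a secret with this client could look like the following sketch. The vault URL and secret name here are made up for illustration:

using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;
using System.Threading.Tasks;

public static class SecretReader
{
    // Hypothetical vault URL and secret name - replace with your own.
    public static async Task<string> GetSqlConnectionStringAsync()
    {
        var tokenProvider = new AzureServiceTokenProvider();
        var kv = new KeyVaultClient(
            new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

        var secret = await kv.GetSecretAsync(
            "https://contoso-vault.vault.azure.net/", "SqlConnectionString");

        return secret.Value;
    }
}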

Step up – activating zero-secret mode

We managed to deploy our web application with zero secrets above. However, we still have secrets for the SQL database, storage accounts and so on in our Key Vault; we just don't have to put them in our configuration files. But they are still there, loaded during the startup event of our web application. This is a great improvement, of course. But MSI allows us to take this a step further.

Azure AD Authentication for Azure Services

To leverage MSI's full potential we should use Azure AD authentication (with Azure RBAC). For instance, we have been using Shared Access Signatures or SQL connection strings to communicate with Azure Storage/Service Bus and SQL servers. With AD authentication, we instead use a security principal that has a role assignment via Azure RBAC.

Azure is gradually enabling AD authentication for its resources. As of today (at the time of writing this blog) the following services/resources support AD authentication with Managed Service Identity.

Service                | Resource ID                   | Status    | Date           | Assign access
Azure Resource Manager | https://management.azure.com/ | Available | September 2017 | Azure portal, PowerShell, Azure CLI
Azure Key Vault        | https://vault.azure.net       | Available | September 2017 |
Azure Data Lake        | https://datalake.azure.net/   | Available | September 2017 |
Azure SQL              | https://database.windows.net/ | Available | October 2017   |
Azure Event Hubs       | https://eventhubs.azure.net   | Available | December 2017  |
Azure Service Bus      | https://servicebus.azure.net  | Available | December 2017  |
Azure Storage          | https://storage.azure.com/    | Preview   | May 2018       |

Read more updated info here.

AD authentication finally allows us to completely remove those secrets from the Key Vault and access storage accounts, Data Lake stores and SQL servers directly with MSI tokens. Let's see some examples to understand this.

Example: Accessing Storage Queues with MSI

In our earlier example, we talked about an Azure web app for which we have enabled Managed Service Identity. In this example we will see how we can put a message into an Azure Storage Queue using MSI. Assume our web application name is:

contoso-msi-web-app

Once we enabled the managed service identity for this web app, Azure provisioned an identity (an AD Application object and a Service Principal for it) with the same name as the web application, i.e. contoso-msi-web-app.

Now we need to set up a role assignment for this Service Principal so that it can access the storage account. We can do that in the Azure Portal: go to the storage account's IAM blade (the access control page) and add a role for this principal. Of course, you can also do that with PowerShell.

If you are not doing it in Portal, you need to know the ID of the MSI. Here’s how you get that:

az resource show -n $webApp -g $resourceGroup --resource-type Microsoft.Web/sites --query identity

You should see output like the following:

{
  "principalId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "tenantId": "xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx",
  "type": null
}

The Principal ID is what you are after. We can now assign roles for this principal as follows:

$existingRoleDef = Get-AzureRmRoleAssignment `
    -ObjectId  `
    -RoleDefinitionName "Contributor" `
    -ResourceGroupName "RGP NAME"
if ($existingRoleDef -eq $null) {
    New-AzureRmRoleAssignment `
        -ObjectId  `
        -RoleDefinitionName "Contributor" `
        -ResourceGroupName "RGP NAME"
}

You can run these commands in the CD pipeline with the inline Azure PowerShell task in VSTS release pipelines.

Let's write an MSI token helper class.
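
Here's a minimal sketch of what such a helper could look like; it simply wraps the AzureServiceTokenProvider we used earlier (the class and method names here are illustrative):

using System.Threading.Tasks;
using Microsoft.Azure.Services.AppAuthentication;

// Illustrative helper - wraps AzureServiceTokenProvider so callers only pass a resource URI.
public static class MsiTokenHelper
{
    private static readonly AzureServiceTokenProvider TokenProvider = new AzureServiceTokenProvider();

    // Returns an AAD access token for the given resource, e.g. "https://storage.azure.com/".
    public static Task<string> GetTokenAsync(string resource)
    {
        return TokenProvider.GetAccessTokenAsync(resource);
    }
}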

We will use the Token Helper in a Storage Account helper class.
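
A possible shape for that helper, assuming the TokenCredential support in the WindowsAzure.Storage client library (the storage account name is a placeholder):

using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Queue;

// Illustrative helper - builds a CloudQueueClient that authenticates with the MSI token
// instead of a storage account key.
public static class StorageQueueHelper
{
    public static async Task<CloudQueueClient> GetQueueClientAsync(string accountName)
    {
        // "https://storage.azure.com/" is the AAD resource URI for Azure Storage.
        var accessToken = await MsiTokenHelper.GetTokenAsync("https://storage.azure.com/");

        var tokenCredential = new TokenCredential(accessToken);
        var storageCredentials = new StorageCredentials(tokenCredential);

        var queueEndpoint = new Uri($"https://{accountName}.queue.core.windows.net");
        return new CloudQueueClient(queueEndpoint, storageCredentials);
    }
}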

Now, let’s write a message into the Storage Queue.
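
Something along these lines should do it; the storage account and queue names below are placeholders:

using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Queue;

public static class QueueWriter
{
    // Illustrative usage - storage account and queue names are placeholders.
    public static async Task EnqueueAsync(string message)
    {
        var queueClient = await StorageQueueHelper.GetQueueClientAsync("contosostorage");
        var queue = queueClient.GetQueueReference("orders");

        await queue.CreateIfNotExistsAsync();
        await queue.AddMessageAsync(new CloudQueueMessage(message));
    }
}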

Isn’t it awesome?

Another example, this time SQL server

As of now, Azure SQL Database does not support creating logins or users from service principals created via Managed Service Identity. Fortunately, we have a workaround: we can add the MSI principal to an AAD group as a member, and then grant that group access to the database.

We can use the Azure CLI to create the group and add our MSI to it:

az ad group create --display-name sqlusers --mail-nickname 'NotNeeded'
az ad group member add -g sqlusers --member-id xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx

Again, we are using the MSI principal ID as the member ID parameter here.
Next, we need to allow this group to access the SQL database. PowerShell to the rescue again:

$query = @"CREATE USER [$adGroupName] FROM EXTERNAL PROVIDER
GO
ALTER ROLE db_owner ADD MEMBER [$adGroupName]
"@
sqlcmd.exe -S "tcp:$sqlServer,1433" `
-N -C -d $database -G -U $sqlAdmin.UserName `
-P $sqlAdmin.GetNetworkCredential().Password `
-Q $query

Let’s write a token helper class for SQL as we did before for storage queue.
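
A sketch of such a helper could look like this, using the https://database.windows.net/ resource URI and the AccessToken property of SqlConnection (the server and database names are placeholders):

using System.Data.SqlClient;
using System.Threading.Tasks;
using Microsoft.Azure.Services.AppAuthentication;

// Illustrative helper - opens a SqlConnection authenticated with the MSI access token
// instead of a user name/password in the connection string.
public static class SqlTokenHelper
{
    public static async Task<SqlConnection> GetOpenConnectionAsync(string server, string database)
    {
        var tokenProvider = new AzureServiceTokenProvider();
        var accessToken = await tokenProvider.GetAccessTokenAsync("https://database.windows.net/");

        var connection = new SqlConnection(
            $"Server=tcp:{server}.database.windows.net,1433;Database={database};")
        {
            AccessToken = accessToken
        };

        await connection.OpenAsync();
        return connection;
    }
}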

We are almost done; now we can run SQL commands from the web app like this:
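
For example, a query could be executed along the lines of the following sketch (server, database and table names are placeholders):

using System.Data.SqlClient;
using System.Threading.Tasks;

public static class UserRepository
{
    // Illustrative query - adjust the names to your own environment.
    public static async Task<int> CountUsersAsync()
    {
        using (var connection = await SqlTokenHelper.GetOpenConnectionAsync("contoso-sql-server", "contosodb"))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM dbo.Users", connection))
        {
            return (int)await command.ExecuteScalarAsync();
        }
    }
}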

Voila!

Conclusion

Managed Service Identity is awesome and powerful; it really helps build applications whose security is easy to manage over a longer period. Especially when you have lots of applications, you end up with a huge number of service principals, and managing their secrets over time and keeping track of their expiration is a nightmare. Managed Service Identity makes this so much nicer!

 

Thanks for reading!

Secure Azure Web sites with Web Application Gateway with end-to-end SSL connections

The Problem

In order to meet higher compliance demands, and often as a security best practice, we want to put an Azure web site behind a Web Application Firewall (aka WAF). The WAF provides mitigations for known malicious attack vectors defined in the OWASP top 10 security vulnerabilities. Azure Application Gateway is a layer 7 load balancer that provides WAF out of the box. However, restricting Web App access to the Application Gateway only is not trivial.
To achieve the best isolation, and hence protection, we can provision an Azure App Service Environment (aka ASE) and put all the web apps inside the virtual network of the ASE. This is by far the most secure way to lock down a web application and other Azure resources from internet access. But an ASE deployment has other consequences: it is costly, and because the web apps are totally isolated, sitting in a private VNET, the dev team needs to adopt an unusual deployment pipeline to continuously deploy changes to the web apps. This is not an ideal solution for many scenarios.
However, there's an intermediate solution architecture that provides WAF without the complexity that ASE brings into the solution architecture, giving a sort of best of both worlds. The architecture looks like the following:

The idea is to provision an Application Gateway inside a virtual network and configure it as a reverse proxy to the Azure web app. This means the web app should never receive traffic directly, but only through the gateway. The gateway needs to be configured with the custom domain and SSL certificates. Once a request is received, the gateway terminates the SSL connection and creates another SSL connection to the back-end web apps configured in a back-end pool. For development purposes, the back-end apps can use the Azure wildcard certificate (*.azurewebsites.net), but for production scenarios it's recommended to use a custom certificate. To make sure no direct traffic reaches the Azure web apps, we also need to whitelist the gateway IP address in the web apps. This blocks every request except the ones coming through the gateway.

How to do that?

I have prepared an Azure Resource Manager template in this GitHub repo that will provision the following:

  • Virtual network (Application Gateway needs a Virtual network).
  • Subnet for the Application Gateway into the virtual network.
  • Public IP address for the Application Gateway.
  • An Application Gateway that is pre-configured to protect any Azure web site.

How to provision?

Before you run the scripts you need the following:
  • An Azure subscription
  • An Azure web site to guard with WAF
  • An SSL certificate to configure the front-end listeners. (This is the gateway certificate that the end users of your apps, basically their browsers, will see.) Typically a Personal Information Exchange (aka pfx) file.
  • The password of the pfx file.
  • The SSL certificate used to protect the Azure web sites, typically a *.cer file. This can be the *.azurewebsites.net certificate for development purposes.

You need to fill out the parameters.json file with the appropriate values; some examples are given below:
        "vnetName": {
            "value": "myvnet"
        },
        "appGatewayName": {
            "value": "mygateway"
        },
        "azureWebsiteFqdn": {
            "value": "myapp.azurewebsites.net"
        },
        "frontendCertificateData": {
            "value": ""
        },
        "frontendCertificatePassword": {
            "value": ""
        },
        "backendCertificateData": {
            "value": ""
        }
Here, frontendCertificateData needs to be the Base64-encoded content of your pfx file.
Once you have the prerequisites, go to PowerShell and run:
    $> ./deploy.ps1 `
        -subscriptionId "" `
        -resourceGroupName ""
This will provision the Application Gateway in your resource group.

Important !

The final piece of work you need to do is to whitelist the IP address of the Application Gateway in your Azure Web App. This makes sure nobody can get direct access to your Azure web app; requests can only come through the gateway.

Contribute

Contribution is always appreciated.

CQRS and ES on Azure Table Storage

Lately I was playing with Event Sourcing and the Command Query Responsibility Segregation (aka CQRS) pattern on Azure Table storage, and I thought of creating a lightweight library that facilitates writing such applications. I ended up with a NuGet package to do this. Here is the GitHub repository.

A lightweight CQRS supporting library with Event Store based on Azure Table Storage.

Quick start guide

Install

Install the SuperNova.Storage Nuget package into the project.

Install-Package SuperNova.Storage -Version 1.0.0

The dependencies of the package are:

  • .NETCoreApp 2.0
  • Microsoft.Azure.DocumentDB.Core (>= 1.7.1)
  • Microsoft.Extensions.Logging.Debug (>= 2.0.0)
  • SuperNova.Shared (>= 1.0.0)
  • WindowsAzure.Storage (>= 8.5.0)

Implementation guide

Write Side – Event Sourcing

Once the package is installed, we can start sourcing events in an application. For example, let's start with the canonical example of a UserController in a Web API project.

We can use dependency injection to make the EventStore available in our controller.

Here's an example where we register an instance of the Event Store with the DI framework in our Startup.cs:

// Config object encapsulates the table storage connection string
services.AddSingleton(new EventStore( ... provide config ));

Now the controller:

[Produces("application/json")]
[Route("users")]
public class UsersController : Controller
{
    private readonly IEventStore eventStore;

    public UsersController(IEventStore eventStore)
    {
        this.eventStore = eventStore; // Here we capture the event store handle
    }

    // ... other methods skipped here
}

Aggregate

Implementing event sourcing becomes much handier when it's paired with Domain Driven Design (aka DDD). We are going to assume familiarity with DDD concepts (especially Aggregate Roots).

An aggregate is our consistency boundary (read: transactional boundary) in Event Sourcing. (Technically, aggregate IDs are our partition keys in the Event Store table; therefore, we can only apply an atomic operation at the level of a single aggregate root.)

Let’s create an Aggregate for our User domain entity:

using SuperNova.Shared.Messaging.Events.Users;
using SuperNova.Shared.Supports;

public class UserAggregate : AggregateRoot
{
    private string _userName;
    private string _emailAddress;
    private Guid _userId;
    private bool _blocked;

Once we have the aggregate class written, we should come up with the events that are relevant to this aggregate. Event Storming is a good way to discover them.

Here are the events that we will use for our example scenario:

public class UserAggregate : AggregateRoot
{
    // ... skipped other codes

    #region Apply events
    private void Apply(UserRegistered e)
    {
        this._userId = e.AggregateId;
        this._userName = e.UserName;
        this._emailAddress = e.Email;
    }

    private void Apply(UserBlocked e)
    {
        this._blocked = true;
    }

    private void Apply(UserNameChanged e)
    {
        this._userName = e.NewName;
    }
    #endregion

    // ... skipped other codes
}
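
The event types referenced above ship with the SuperNova.Shared package; a rough sketch of their shape could look like the following (the DomainEvent base type and its members are assumptions here):

using System;

// Rough sketch only - the real event types live in SuperNova.Shared.Messaging.Events.Users
// and may differ; the DomainEvent base type here is an assumption.
public abstract class DomainEvent
{
    public Guid AggregateId { get; set; }
    public long Version { get; set; }
}

public class UserRegistered : DomainEvent
{
    public string UserName { get; set; }
    public string Email { get; set; }
}

public class UserBlocked : DomainEvent
{
}

public class UserNameChanged : DomainEvent
{
    public string NewName { get; set; }
}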

Now that we have our business events defined, we will define our commands for the aggregate:

public class UserAggregate : AggregateRoot
{
    #region Accept commands
    public void RegisterNew(string userName, string emailAddress)
    {
        Ensure.ArgumentNotNullOrWhiteSpace(userName, nameof(userName));
        Ensure.ArgumentNotNullOrWhiteSpace(emailAddress, nameof(emailAddress));

        ApplyChange(new UserRegistered
        {
            AggregateId = Guid.NewGuid(),
            Email = emailAddress,
            UserName = userName
        });
    }

    public void BlockUser(Guid userId)
    {
        ApplyChange(new UserBlocked
        {
            AggregateId = userId
        });
    }

    public void RenameUser(Guid userId, string name)
    {
        Ensure.ArgumentNotNullOrWhiteSpace(name, nameof(name));

        ApplyChange(new UserNameChanged
        {
            AggregateId = userId,
            NewName = name
        });
    }
    #endregion

    // ... skipped other codes
}

So far so good!

Now we will modify the Web API controller to send the correct command to the aggregate.

public class UserPayload
{
    public string UserName { get; set; }
    public string Email { get; set; }
}

// POST: User
[HttpPost]
public async Task<IActionResult> Post(Guid projectId, [FromBody]UserPayload user)
{
    Ensure.ArgumentNotNull(user, nameof(user));

    var userId = Guid.NewGuid();

    await eventStore.ExecuteNewAsync(
        Tenant, "user_event_stream", userId, async () =>
        {
            var aggregate = new UserAggregate();

            aggregate.RegisterNew(user.UserName, user.Email);

            return await Task.FromResult(aggregate);
        });

    return new JsonResult(new { id = userId });
}

And another API to modify existing users into the system:

// PUT: User
[HttpPut("{userId}")]
public async Task<IActionResult> Put(Guid projectId, Guid userId, [FromBody]string name)
{
    Ensure.ArgumentNotNullOrWhiteSpace(name, nameof(name));

    await eventStore.ExecuteEditAsync(
        Tenant, "user_event_stream", userId,
        async (aggregate) =>
        {
            aggregate.RenameUser(userId, name);

            await Task.CompletedTask;
        }).ConfigureAwait(false);

    return new JsonResult(new { id = userId });
}

That's it! We have our WRITE side completed. The event store now contains the events for the user event stream.

(Screenshot: the events stored in the Azure Table Storage event store.)

Read Side – Materialized Views

We can consume the events in a separate console worker process and generate the materialized views for the READ side.

The readers (the console application, an Azure Web Worker for instance) act like feed processors and have their own lease collection, which makes them fault tolerant and resilient: if a reader crashes, it catches up from the last event version that was materialized successfully. It polls the event store (instead of using a message broker such as Service Bus) on purpose, to speed things up and avoid latencies during event propagation. Scalability is ensured by dedicating a lease per tenant and event stream, which provides pretty high scalability.

How to listen for events?

In a worker application (typically a console application) we will listen for events:

private static async Task Run()
{
    var eventConsumer = new EventStreamConsumer(
        // ... skipped for simplicity
        "user-event-stream",
        "user-event-stream-lease");

    await eventConsumer.RunAndBlock((evts) =>
    {
        foreach (var evt in evts)
        {
            if (evt is UserRegistered userAddedEvent)
            {
                readModel.AddUserAsync(new UserDto
                {
                    UserId = userAddedEvent.AggregateId,
                    Name = userAddedEvent.UserName,
                    Email = userAddedEvent.Email
                }, evt.Version);
            }
            else if (evt is UserNameChanged userChangedEvent)
            {
                readModel.UpdateUserAsync(new UserDto
                {
                    UserId = userChangedEvent.AggregateId,
                    Name = userChangedEvent.NewName
                }, evt.Version);
            }
        }

    }, CancellationToken.None);
}

static void Main(string[] args)
{
    Run().Wait();
}

Now we have a document collection (we are using Cosmos DB's DocumentDB API in this example for materialization, but it could essentially be any database) that is updated as we store events in the event stream.
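
The readModel used in the consumer above isn't shown; a minimal sketch of it, built on the DocumentDB SDK (one of the package dependencies), could look like this. The database and collection names, the UserDto shape and the document layout are all assumptions:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;

// Mirrors the properties used in the consumer code above.
public class UserDto
{
    public Guid UserId { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

// Illustrative read model writer - database, collection and document shape are assumptions.
public class UserReadModel
{
    private const string Database = "readmodels";
    private const string Collection = "users";
    private readonly DocumentClient client;

    public UserReadModel(Uri serviceEndpoint, string authKey)
    {
        this.client = new DocumentClient(serviceEndpoint, authKey);
    }

    public Task AddUserAsync(UserDto user, long version) => UpsertAsync(user, version);

    public Task UpdateUserAsync(UserDto user, long version) => UpsertAsync(user, version);

    private Task UpsertAsync(UserDto user, long version)
    {
        // Upsert keeps the handler idempotent when events are re-processed after a crash.
        var collectionUri = UriFactory.CreateDocumentCollectionUri(Database, Collection);
        return client.UpsertDocumentAsync(collectionUri, new
        {
            id = user.UserId.ToString(),
            user.Name,
            user.Email,
            version
        });
    }
}

A real implementation would likely read the existing document and merge changes instead of blindly upserting, and use the version for concurrency checks.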

Conclusion

The library is very lightweight and heavily influenced by Greg Young's event store and aggregate models. Feel free to use it and contribute.

Thank you!

3D Docker Swarm visualizer with ThreeJS

A few days ago I wrote about creating a Docker Swarm visualizer here. I enjoyed writing that piece of software very much. However, I wanted a more fun way to look at the cluster. So last weekend I thought of creating a 3D viewer of the cluster that could run on a large screen, showing the machines and their workloads in real time.

After hacking on it a little, I ended up with a 3D perspective of the Docker swarm cluster that shows the nodes (workers in green, managers in blue and drained nodes in red) and the microservice containers as small blue cubes, all in real time, by leveraging the ThreeJS library. It's just a fun extra view on top of the existing visualizer. Hope you will like it!

You need to rebuild the Docker image from the Dockerfile in order to have the 3D view enabled; I didn't want to include it in the moimhossain/viswarm image.

Docker Swarm visualizer

VISWARM

A visual representation facilitates understanding of any complex system design, and running microservices on a cluster is no exception. Since I am involved in running and operating containers in a Docker swarm cluster, I often wonder how I could stay better on top of how container tasks are distributed at any given moment, their health, the health of the machines underneath, etc. I found docker-swarm-visualizer on GitHub, which does a nice job of visualizing the nodes and tasks graphically. But I wanted to associate a little more information (i.e. image, ports, node IP addresses, etc.) with the visualization, which could help me take follow-up actions as needed. That led me to write a swarm visualizer that fulfils my wishes.

Here's a video glimpse for inspiration.

Well, that's one little excuse of mine; of course, writing a visualizer is always a fun activity for me. The objectives of the visualizer were the following:

  • Run the visualizer as a container in the swarm
  • Real-time updates when nodes are added, removed, drained, failed, etc.
  • Real-time updates when services are created, updated, or their replica counts change
  • Real-time updates when tasks are added, removed, etc.
  • Display task images and tags to track versions (when rolling updates take place)
  • Display IP addresses and ports for nodes (helps troubleshooting)

How to run

To get the visualizer up and running in your cluster


> docker run -p 9009:9009 -v /var/run/docker.sock:/var/run/docker.sock moimhossain/viswarm

How to develop

This is what you need to do in order to start hacking the codebase


> npm install

It might be the case that you have webpack installed only locally and you get a "webpack command not recognized" error. In that scenario, you can either install webpack globally or run it from the local installation by using the following command:


node_modules\.bin\webpack

Start the development server (changes will now update live in browser)


> npm run dev

To view your project, go to: http://localhost:9009/

Some of the wishes I am working on:

  • Stop tasks from the user interface
  • Update service definitions from user interface
  • Leave, drain a node from the user interface
  • Alert on crucial events like node drain, service down etc.

The application is written with Express 4 on Node.js, with React-Redux for the user interface. The Docker Remote API sits behind a proxy with a Node API on top, which helps avoid exposing the Docker API directly (for instance when IPv6 is used).

If you are playing with Docker swarm clusters and reading this, I insist: give it a go, you might like it as well. And of course, it's on GitHub, waiting for helping hands to take it to the next level.

Azure template to provision Docker swarm mode cluster

What is a swarm?

The cluster management and orchestration features embedded in the Docker Engine are built using SwarmKit. Docker engines participating in a cluster are running in swarm mode. You enable swarm mode for an engine by either initializing a swarm or joining an existing swarm. A swarm is a cluster of Docker engines, or nodes, where you deploy services. The Docker Engine CLI and API include commands to manage swarm nodes (e.g., add or remove nodes), and deploy and orchestrate services across the swarm.

I was recently trying to come up with a script that generates a Docker swarm mode cluster on Microsoft Azure, ready to take container workloads. I thought Azure Container Service (ACS) would already support that; however, I figured out that's not the case. Azure doesn't support Docker swarm mode in ACS yet, at least as of today (25th July 2017), which forced me to come up with my own Resource Manager template that does the job.

What’s in it?

The RM template will provision the following resources:

  • A virtual network
  • An availability set for manager nodes
  • 3 virtual machines in the availability set created above (the numbers and names can be parameterized as per your needs)
  • A load balancer with a public port that round-robins to the 3 VMs on port 80, and inbound NAT rules from ports 5000, 5001 and 5002 to SSH port 22 on the 3 machines
  • Configuration of the 3 VMs as Docker swarm mode managers
  • A virtual machine scale set (VMSS) in the same VNET
  • 3 nodes that are joined as workers into the above swarm
  • A load balancer for the VMSS (with inbound NAT rules starting at port 50000 to SSH port 22 on the VMSS instances)

The design can be visualized with the following diagram:

There's a handy PowerShell script that can help automate provisioning these resources. But you can also just click the "Deploy to Azure" button below.

Thanks!

The entire script can be found in this GitHub repo. Feel free to use it as needed!