OpenSSL as a Service

OpenSSL is awesome! It does, however, take a little manual work to remember all the commands and to execute them on a machine that has OpenSSL installed. In this post, I'm going to build an HTTP API over OpenSSL, covering the most commonly used commands (with the possibility to extend it further as required). This will help folks who want to run OpenSSL in a private network but orchestrate it from their automation workflows.


Ever wanted to automate the TLS (also known as SSL) configuration process for your web application? You know, the sites that are served via HTTPS, for which Chrome shows a green “secure” mark in the address bar. Serving a site over HTTP is insecure (even for static content), and major browsers will mark those sites as not secure; Chrome already does that today.

Serving content via HTTPS involves buying a digital certificate (aka an SSL/TLS certificate) from a certificate authority (CA). The process seems complicated (and sometimes expensive) to many average site owners and developers. Let’s Encrypt addressed this hardship and made it painless: it’s an open certificate authority that provides free TLS certificates in an automated and elegant way.

However, free certificates might not be ideal for enterprise scenarios. An enterprise might have a requirement to buy certificates from a specific CA. In many cases, that process is manual, complicated, and slow. Typically, the workflow starts by generating a Certificate Signing Request (also known as a CSR), which requires generating an asymmetric key pair (a public and private key). The CSR is then sent to the CA to obtain a digital identity certificate. It doesn’t stop there. Once the certificate is provided by the CA, it sometimes (especially if you are in the IIS, .NET, or Azure world) needs to be converted to a PFX (Personal Information Exchange) file before it can be deployed to the web server.

PFX (aka PKCS #12) is an archive file format for storing many cryptography objects in a single file. It’s used to bundle a private key with its X.509 certificate, or to bundle all the members of a chain of trust. The file may be encrypted and signed. The internal storage containers (aka SafeBags) may also be encrypted and signed.
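For instance, bundling a private key with its issued certificate (and the CA chain) into a PFX is a single OpenSSL invocation – the file names below are placeholders:

openssl pkcs12 -export -out certificate.pfx -inkey private.key -in certificate.crt -certfile ca-chain.crt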

Generating a CSR and converting a digital identity certificate to PFX format are often done manually. There are some online services that allow you to generate CSRs – via an API or a UI. These are very useful and handy, but not the best fit for an enterprise, because the private key needs to be shared with the online provider in order to generate the CSR. That leads people to use the vastly popular OpenSSL utility on their local workstations to generate CSRs. In this article, that is exactly what I am trying to avoid: I want an API over OpenSSL, so that I can invoke it from my other automation workflows running in the cloud.

Next, we will see how we can expose OpenSSL over an HTTP API in a Docker container, so we can run the container in our private enterprise network and orchestrate it in our certificate automation workflows.

The Solution Design

We will write a .NET Core web app that exposes OpenSSL commands via a web API. Each web API request forks an OpenSSL process with the given command and returns the outcome as the API response.

OpenSSL behind a .NET Core Web API

We are using System.Diagnostics.Process to launch OpenSSL in our code. This assumes the OpenSSL executable is present in our path – which we will ensure soon with Docker.

        private static StringBuilder ExecuteOpenSsl(string command)
        {
            var logs = new StringBuilder();
            var executableName = "openssl";
            var processInfo = new ProcessStartInfo(executableName)
            {
                Arguments = command,
                UseShellExecute = false,
                RedirectStandardError = true,
                RedirectStandardOutput = true,
                CreateNoWindow = true
            };

            var process = Process.Start(processInfo);
            // Capture standard output line by line until the process finishes
            while (!process.StandardOutput.EndOfStream)
            {
                logs.AppendLine(process.StandardOutput.ReadLine());
            }
            // Capture anything OpenSSL wrote to standard error as well
            logs.AppendLine(process.StandardError.ReadToEnd());
            return logs;
        }

This simply kicks off the OpenSSL executable with a command and captures the output (or errors). We can now use this in our Web API controller.

    /// <summary>
    /// The OpenSSL API
    /// </summary>
    public class OpenSslController : Controller
    {
        /// <summary>
        /// Creates a new CSR
        /// </summary>
        /// <param name="payload">Payload info</param>
        /// <returns>The CSR with private key</returns>
        [HttpPost]
        public async Task<IActionResult> Csr([FromBody] CsrRequestPayload payload)
        {
            var response = await CertificateManager.GenerateCSRAsync(payload);
            return new JsonResult(response);
        }
    }

This snippet shows only one example, where we receive a CSR generation request, use OpenSSL to generate the CSR, and return the CSR details (as a base64-encoded string) in the API response.
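For illustration, here is a minimal sketch of what a helper like CertificateManager.GenerateCSRAsync could look like on top of ExecuteOpenSsl. The payload properties, the CsrResponse type, and the exact openssl arguments here are assumptions for this sketch, not the repository’s actual code:

    public static class CertificateManager
    {
        public static async Task<CsrResponse> GenerateCSRAsync(CsrRequestPayload payload)
        {
            var keyFile = Path.GetTempFileName();
            var csrFile = Path.GetTempFileName();

            // Generate a 2048-bit RSA key pair and a CSR in one openssl invocation;
            // the -subj fields are assumed to come from the request payload
            ExecuteOpenSsl(
                $"req -new -newkey rsa:2048 -nodes -keyout {keyFile} -out {csrFile} " +
                $"-subj \"/C={payload.Country}/O={payload.Organization}/CN={payload.CommonName}\"");

            // Return both artifacts base64 encoded, then clean up the temp files
            var response = new CsrResponse
            {
                Csr = Convert.ToBase64String(await File.ReadAllBytesAsync(csrFile)),
                PrivateKey = Convert.ToBase64String(await File.ReadAllBytesAsync(keyFile))
            };
            File.Delete(keyFile);
            File.Delete(csrFile);
            return response;
        }
    }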

The other commands follow the same model, so I am skipping them here.

Building the Docker Image

The snippet above assumes that we have OpenSSL installed on the machine and that the executable is registered in the system’s path. We will turn that assumption into a fact by installing OpenSSL in our Docker image.

FROM microsoft/aspnetcore:2.0 AS base

RUN apt-get update -y
RUN apt-get install -y openssl

Here we are using aspnetcore:2.0 as our base image (a Debian-based Linux image) and installing OpenSSL right after.
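The rest of the Dockerfile follows the usual ASP.NET Core pattern – copy the published output into the image and set the entry point. A sketch, with a placeholder assembly name:

WORKDIR /app
COPY ./publish .
EXPOSE 80
ENTRYPOINT ["dotnet", "OpenSslService.dll"]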

Let’s Run it!

I have built the Docker image and published it to Docker Hub. All we need to do is run it:
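A typical invocation maps host port 8080 to the container’s port 80 (the image name below is a placeholder for the published one):

docker run -d -p 8080:80 <docker-hub-user>/openssl-service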


The default port of the web API is 80, though in this example we will run it on 8080. Let’s open a browser pointing to:


Voila! We have our APIs. Here’s the Swagger UI for the web API.


And we can test our CSR generation API via Postman:


The complete code for this web app, along with the Dockerfile, can be found in this GitHub repository. The Docker image is on Docker Hub.

Thanks for reading.

Azure Web App – Removing IP Restrictions

Azure Web App allows us to configure IP restrictions (the same goes for Azure Functions and API apps). This lets us define a priority-ordered allow/deny list of IP addresses as access rules for our app. The allow list can include IPv4 and IPv6 addresses.

IP restrictions flow

Source: MSDN

Developers often run into scenarios where they want to manipulate these restriction rules programmatically. Adding or removing IP restrictions from the portal is easy and documented here. We can also manipulate them with ARM templates, like the following:

"ipSecurityRestrictions": [
"ipAddress": "",
"action": "Allow",
"tag": "Default",
"priority": 100,
"name": "allowed access"

However, sometimes it’s handy to do this in PowerShell scripts that can be executed as a build/release task in a CI/CD pipeline or in other environments – adding IP restrictions and/or removing some restriction rules with a script. Google finds quite a few blog posts that show how to add IP restrictions, but not many that show how to remove one.

In this post, I will present a complete PowerShell script that allows us to do the following:

  • Add an IP restriction
  • View the IP restrictions
  • Remove all IP Restrictions


function Add-AzureRmWebAppIPRestrictions {
    param (
        [string] $WebAppName,
        [string] $ResourceGroupName,
        [string] $IpAddress,
        [string] $Mask
    )
    $APIVersion = ((Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Web).ResourceTypes | Where-Object ResourceTypeName -eq sites).ApiVersions[0]
    $WebAppConfig = (Get-AzureRmResource -ResourceType Microsoft.Web/sites/config -ResourceName $WebAppName -ResourceGroupName $ResourceGroupName -ApiVersion $APIVersion)
    $IpSecurityRestrictions = $WebAppConfig.Properties.ipSecurityRestrictions

    if ($IpAddress -in $IpSecurityRestrictions.ipAddress) {
        Write-Output "$IpAddress is already restricted in $WebAppName."
    }
    else {
        $webIP = [PSCustomObject]@{ipAddress = ''; subnetMask = ''; Priority = 300}
        $webIP.ipAddress = $IpAddress
        $webIP.subnetMask = $Mask
        if ($null -eq $IpSecurityRestrictions) {
            $IpSecurityRestrictions = @()
        }

        [System.Collections.ArrayList]$list = $IpSecurityRestrictions
        $list.Add($webIP) | Out-Null

        # Write the updated rule list back onto the config resource
        $WebAppConfig.Properties.ipSecurityRestrictions = $list
        $WebAppConfig | Set-AzureRmResource -ApiVersion $APIVersion -Force | Out-Null
        Write-Output "New restricted IP address $IpAddress has been added to WebApp $WebAppName"
    }
}


function Get-AzureRmWebAppIPRestrictions {
    param (
        [string] $WebAppName,
        [string] $ResourceGroupName
    )
    $APIVersion = ((Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Web).ResourceTypes | Where-Object ResourceTypeName -eq sites).ApiVersions[0]

    $WebAppConfig = (Get-AzureRmResource -ResourceType Microsoft.Web/sites/config -ResourceName $WebAppName -ResourceGroupName $ResourceGroupName -ApiVersion $APIVersion)
    $IpSecurityRestrictions = $WebAppConfig.Properties.ipSecurityRestrictions
    if ($null -eq $IpSecurityRestrictions) {
        Write-Output "$WebAppName has no IP restrictions."
    }
    else {
        Write-Output "$WebAppName IP Restrictions: "
        $IpSecurityRestrictions
    }
}

function Remove-AzureRmWebAppIPRestrictions {
    param (
        [string] $WebAppName,
        [string] $ResourceGroupName
    )
    $APIVersion = ((Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Web).ResourceTypes | Where-Object ResourceTypeName -eq sites).ApiVersions[0]

    $r = Get-AzureRmResource -ResourceGroupName $ResourceGroupName -ResourceType Microsoft.Web/sites/config -ResourceName "$WebAppName/web" -ApiVersion $APIVersion
    $p = $r.Properties
    # Clearing the array removes every restriction rule
    $p.ipSecurityRestrictions = @()
    Set-AzureRmResource -ResourceGroupName $ResourceGroupName -ResourceType Microsoft.Web/sites/config -ResourceName "$WebAppName/web" -ApiVersion $APIVersion -PropertyObject $p -Force
}

And finally, to test them:
function Test-Everything {
    if (!(Get-AzureRmContext)) {
        Write-Output "Please login to your Azure account"
        Login-AzureRmAccount
    }

    Get-AzureRmWebAppIPRestrictions -WebAppName "my-app" -ResourceGroupName "my-rg-name"

    Remove-AzureRmWebAppIPRestrictions -WebAppName "my-app" -ResourceGroupName "my-rg-name"

    Add-AzureRmWebAppIPRestrictions -WebAppName "my-app" -ResourceGroupName "my-rg-name" -IpAddress "" -Mask ""

    Get-AzureRmWebAppIPRestrictions -WebAppName "my-app" -ResourceGroupName "my-rg-name"
}

Thanks for reading!

CQRS and ES on Azure Table Storage

Lately I have been playing with Event Sourcing and the Command Query Responsibility Segregation (aka CQRS) pattern on Azure Table storage, and I thought of creating a lightweight library that facilitates writing such applications. I ended up with a NuGet package. Here is the GitHub repository.

A lightweight CQRS supporting library with Event Store based on Azure Table Storage.

Quick start guide


Install the SuperNova.Storage NuGet package into the project.

Install-Package SuperNova.Storage -Version 1.0.0

The dependencies of the package are:

  • .NETCoreApp 2.0
  • Microsoft.Azure.DocumentDB.Core (>= 1.7.1)
  • Microsoft.Extensions.Logging.Debug (>= 2.0.0)
  • SuperNova.Shared (>= 1.0.0)
  • WindowsAzure.Storage (>= 8.5.0)

Implementation guide

Write Side – Event Sourcing

Once the package is installed, we can start sourcing events in an application. For example, let’s start with a canonical example of a UserController in a Web API project.

We can use dependency injection to make the EventStore available in our controller.

Here’s an example where we register an instance of the Event Store with the DI framework in our Startup.cs:

// Config object encapsulates the table storage connection string.
// Register against the IEventStore interface so the controller below can resolve it.
services.AddSingleton<IEventStore>(new EventStore( ... provide config ));

Now the controller:

public class UsersController : Controller
{
    private readonly IEventStore eventStore;
    public UsersController(IEventStore eventStore)
    {
        this.eventStore = eventStore; // Here we capture the event store handle
    }
    // ... other methods skipped here
}


Implementing event sourcing becomes much handier when it’s fostered with Domain-Driven Design (aka DDD). We are going to assume that we are familiar with DDD concepts (especially aggregate roots).

An aggregate is our consistency boundary (read: transactional boundary) in Event Sourcing. (Technically, aggregate IDs are our partition keys on the Event Store table – therefore, we can only apply an atomic operation at the level of a single aggregate root.)

Let’s create an Aggregate for our User domain entity:

using SuperNova.Shared.Messaging.Events.Users;
using SuperNova.Shared.Supports;

public class UserAggregate : AggregateRoot
{
    private string _userName;
    private string _emailAddress;
    private Guid _userId;
    private bool _blocked;
}

Once we have the aggregate class written, we should come up with the events that are relevant to this aggregate. We can use event storming to identify them.

Here are the events that we will use for our example scenario:

public class UserAggregate : AggregateRoot
{
    // ... skipped other codes

    #region Apply events

    private void Apply(UserRegistered e)
    {
        this._userId = e.AggregateId;
        this._userName = e.UserName;
        this._emailAddress = e.Email;
    }

    private void Apply(UserBlocked e)
    {
        this._blocked = true;
    }

    private void Apply(UserNameChanged e)
    {
        this._userName = e.NewName;
    }

    #endregion

    // ... skipped other codes
}
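The event types themselves are plain DTOs carrying the data the Apply methods above consume. A sketch of what they might look like – the Event base class name is an assumption here, and the properties follow from the handlers:

public class UserRegistered : Event // base type name assumed
{
    public Guid AggregateId { get; set; }
    public string UserName { get; set; }
    public string Email { get; set; }
}

public class UserBlocked : Event
{
    public Guid AggregateId { get; set; }
}

public class UserNameChanged : Event
{
    public Guid AggregateId { get; set; }
    public string NewName { get; set; }
}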

Now that we have our business events defined, we will define our commands for the aggregate:

public class UserAggregate : AggregateRoot
{
    #region Accept commands

    public void RegisterNew(string userName, string emailAddress)
    {
        Ensure.ArgumentNotNullOrWhiteSpace(userName, nameof(userName));
        Ensure.ArgumentNotNullOrWhiteSpace(emailAddress, nameof(emailAddress));

        ApplyChange(new UserRegistered
        {
            AggregateId = Guid.NewGuid(),
            Email = emailAddress,
            UserName = userName
        });
    }

    public void BlockUser(Guid userId)
    {
        ApplyChange(new UserBlocked
        {
            AggregateId = userId
        });
    }

    public void RenameUser(Guid userId, string name)
    {
        Ensure.ArgumentNotNullOrWhiteSpace(name, nameof(name));

        ApplyChange(new UserNameChanged
        {
            AggregateId = userId,
            NewName = name
        });
    }

    #endregion

    // ... skipped other codes
}

So far so good!

Now we will modify the Web API controller to send the correct commands to the aggregate.

public class UserPayload
{
    public string UserName { get; set; }
    public string Email { get; set; }
}

// POST: User
[HttpPost]
public async Task<IActionResult> Post(Guid projectId, [FromBody] UserPayload user)
{
    Ensure.ArgumentNotNull(user, nameof(user));

    var userId = Guid.NewGuid();

    await eventStore.ExecuteNewAsync(
        Tenant, "user_event_stream", userId, async () =>
        {
            var aggregate = new UserAggregate();
            aggregate.RegisterNew(user.UserName, user.Email);
            return await Task.FromResult(aggregate);
        });

    return new JsonResult(new { id = userId });
}

And another API to modify existing users in the system:

// PUT: User
[HttpPut]
public async Task<IActionResult> Put(Guid projectId, Guid userId, [FromBody] string name)
{
    Ensure.ArgumentNotNullOrWhiteSpace(name, nameof(name));

    await eventStore.ExecuteEditAsync(
        Tenant, "user_event_stream", userId,
        async (aggregate) =>
        {
            aggregate.RenameUser(userId, name);
            await Task.CompletedTask;
        });

    return new JsonResult(new { id = userId });
}

That’s it! We have our WRITE side completed. The event store now contains the events for the user event stream.


Read Side – Materialized Views

We can consume the events in a separate console worker process and generate the materialized views for the READ side.

The readers (the console application – an Azure Web Worker, for instance) act like feed processors and have their own lease collection, which makes them fault tolerant and resilient. If a reader crashes, it catches up from the last event version that was materialized successfully. It polls deliberately – instead of relying on a message broker (Service Bus, for instance) – to speed up event propagation and avoid latency. Scalability is ensured by dedicating a lease per tenant and event stream, which allows the read side to scale out well.

How to listen for events?

In a worker application (typically a console application), we listen for events:

private static async Task Run()
{
    var eventConsumer = new EventStreamConsumer(
        /* ... skipped for simplicity ... */);

    await eventConsumer.RunAndBlock((evts) =>
    {
        foreach (var evt in evts)
        {
            if (evt is UserRegistered userAddedEvent)
            {
                readModel.AddUserAsync(new UserDto
                {
                    UserId = userAddedEvent.AggregateId,
                    Name = userAddedEvent.UserName,
                    Email = userAddedEvent.Email
                }, evt.Version);
            }
            else if (evt is UserNameChanged userChangedEvent)
            {
                readModel.UpdateUserAsync(new UserDto
                {
                    UserId = userChangedEvent.AggregateId,
                    Name = userChangedEvent.NewName
                }, evt.Version);
            }
        }
    }, CancellationToken.None);
}

static void Main(string[] args)
{
    Run().GetAwaiter().GetResult();
}

Now we have a document collection (we are using Cosmos Document DB in this example for materialization, but it could be any database, essentially) that is updated as we store events in the event stream.
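For completeness, the UserDto materialized above could be as simple as the following sketch (the library’s actual read-model contract may differ):

public class UserDto
{
    public Guid UserId { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}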


The library is very lightweight and heavily influenced by Greg Young’s event store model and aggregate model. Feel free to use it or contribute.

Thank you!