OpenSSL as a Service

OpenSSL is awesome! It does, however, require a little manual work: remembering all the commands and executing them on a machine that has OpenSSL installed. In this post, I'm going to build an HTTP API over OpenSSL, exposing the most commonly used commands (with the possibility to extend it further as required). This will help folks who want to run OpenSSL in a private network but orchestrate it from their automation workflows.

Background

Ever wanted to automate the TLS (also known as SSL) configuration process for your web application? You know, the sites that are served via HTTPS, where Chrome shows a green "secure" mark in the address bar. Serving a site over HTTP is insecure (even for static content), and major browsers mark those sites as not secure; Chrome already does that today.

Serving content via HTTPS involves buying a digital certificate (aka an SSL/TLS certificate) from a certificate authority (CA). The process seems complicated (and sometimes expensive) to many average site owners and developers. Let's Encrypt addressed this hardship and made it painless. It's an open certificate authority that provides free TLS certificates in an automated and elegant way.

However, free certificates might not be ideal for enterprise scenarios. An enterprise might have a requirement to buy certificates from a specific CA. In many cases, that process is manual and often complicated and slow. Typically, the workflow starts by generating a Certificate Signing Request (CSR), which requires generating an asymmetric key pair (a public and private key pair). The CSR is then sent to the CA to obtain a digital identity certificate. It doesn't stop there. Once the certificate is provided by the CA, sometimes (especially if you are in the IIS, .NET or Azure world) it needs to be converted to a PFX (Personal Information Exchange) file to deploy the certificate to the web server.

PFX (aka PKCS #12) is an archive file format for storing many cryptography objects in a single file. It's used to bundle a private key with its X.509 certificate, or to bundle all the members of a chain of trust. The file may be encrypted and signed. The internal storage containers (aka SafeBags) may also be encrypted and signed.
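For reference, the manual workflow above roughly boils down to OpenSSL commands like these (the file names and the subject here are just examples):

# Generate a private key and a CSR in one go
openssl req -new -newkey rsa:2048 -nodes -keyout example.key -out example.csr -subj "/C=US/O=Example Corp/CN=example.com"

# Once the CA returns the certificate (example.crt), bundle it with the key into a PFX
openssl pkcs12 -export -out example.pfx -inkey example.key -in example.crt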

Generating a CSR and converting a digital identity certificate to PFX format are often done manually. There are online services that allow you to generate CSRs via an API or a UI. These are very useful and handy, but not the best fit for an enterprise, because the private keys need to be shared with the online provider in order to generate the CSR. That leads people to use the vastly popular OpenSSL utility on their local workstations to generate CSRs. In this article, this is exactly what I am trying to avoid. I want an API over OpenSSL, so that I can invoke it from my other automation workflows running in the cloud.

Next, we will see how we can expose OpenSSL over an HTTP API in a Docker container, so we can run the container in our private enterprise network and orchestrate it in our certificate automation workflows.

The Solution Design

We will write a .NET Core web app, exposing the OpenSSL commands via a web API. Web API requests will fork an OpenSSL process with the given command and return the outcome as the web API response.

OpenSSL behind a .NET Core Web API

We are using System.Diagnostics.Process to launch OpenSSL from our code. This assumes the OpenSSL executable is present on our PATH, which we will ensure soon with Docker.

        private static StringBuilder ExecuteOpenSsl(string command)
        {
            var logs = new StringBuilder();
            var executableName = "openssl";
            var processInfo = new ProcessStartInfo(executableName)
            {
                Arguments = command,
                UseShellExecute = false,
                RedirectStandardError = true,
                RedirectStandardOutput = true,
                CreateNoWindow = true
            };

            var process = Process.Start(processInfo);

            // Capture standard output line by line
            while (!process.StandardOutput.EndOfStream)
            {
                logs.AppendLine(process.StandardOutput.ReadLine());
            }

            // Capture anything written to standard error as well
            logs.AppendLine(process.StandardError.ReadToEnd());
            process.WaitForExit();

            return logs;
        }

This simply kicks off the OpenSSL executable with a command and captures the output (or errors). We can now use this in our Web API controller.

    /// <summary>
    /// The OpenSSL API
    /// </summary>
    [Produces("application/json")]
    [Route("api/OpenSsl")]
    public class OpenSslController : Controller
    {
        /// <summary>
        /// Creates a new CSR
        /// </summary>
        /// <param name="payload">Payload info</param>
        /// <returns>The CSR with private key</returns>
        [HttpPost("CSR")]
        public async Task<IActionResult> Csr([FromBody] CsrRequestPayload payload)
        {
            var response = await CertificateManager.GenerateCSRAsync(payload);
            return new JsonResult(response);
        }

This snippet shows only one example, where we receive a CSR generation request, use OpenSSL to generate the CSR, and return the CSR details (as a base64-encoded string) in the API response.
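The CsrRequestPayload and CertificateManager types live in the repository. As a rough sketch (the property names, method name and file paths below are illustrative assumptions, not the actual implementation), the CSR generation boils down to composing the openssl req arguments and handing them to the ExecuteOpenSsl helper shown earlier:

// Hypothetical payload shape - the real CsrRequestPayload is defined in the repository
public class CsrRequestPayload
{
    public string CommonName { get; set; }
    public string Organization { get; set; }
    public string Country { get; set; }
}

// Illustrative only: compose the OpenSSL arguments and run them via ExecuteOpenSsl
private static StringBuilder GenerateCsr(CsrRequestPayload payload, string keyFile, string csrFile)
{
    var subject = $"/C={payload.Country}/O={payload.Organization}/CN={payload.CommonName}";
    return ExecuteOpenSsl(
        $"req -new -newkey rsa:2048 -nodes -keyout \"{keyFile}\" -out \"{csrFile}\" -subj \"{subject}\"");
}

The generated CSR and private key can then be read from disk and returned base64 encoded.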

The other commands follow the same model, so I am skipping them here.

Building Docker Image

The snippet above assumes that we have OpenSSL installed on the machine and that the executable is on the system's PATH. We will turn that assumption into a fact by installing OpenSSL in our Docker image.

FROM microsoft/aspnetcore:2.0 AS base

RUN apt-get update -y
RUN apt-get install -y openssl

Here we are using aspnetcore:2.0 as our base image (a Linux-based image) and installing OpenSSL right after.
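The actual Dockerfile is in the GitHub repository; a typical shape for the remaining part might look like this (the publish folder and the assembly name are placeholders):

WORKDIR /app
EXPOSE 80

# Copy the published output of the web app (placeholder path)
COPY ./publish-output .

# Placeholder assembly name for the web API project
ENTRYPOINT ["dotnet", "OpenSslService.dll"]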

Let’s Run it!

I have built the Docker image and published it to Docker Hub. All we need to do now is run it.

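A typical invocation would look something like the following (the image name is a placeholder for the actual image on Docker Hub):

docker run -d -p 8080:80 <docker-hub-account>/openssl-service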

The default port of the web API is 80, though in this example we map it to port 8080 on the host. Let's open a browser pointing to:

http://localhost:8080/

Voila! We have our APIs. Here's the Swagger UI for the web API:

[Screenshot: Swagger UI of the web API]

And we can test our CSR generation API via Postman:

[Screenshot: testing the CSR generation API in Postman]

The complete code for this web app, along with the Dockerfile, can be found in this GitHub repository. The Docker image is on Docker Hub.

Thanks for reading.

Resilient Azure Data Lake Analytics (ADLA) Jobs with Azure Functions

Azure Data Lake Analytics is an on-demand analytics job service that allows writing queries to transform data and grab insights efficiently. The analytics service can handle jobs of any scale instantly by setting the dial for how much power you need.


In many organizations these jobs play a crucial role, and the reliability of their execution can be business critical. Lately I encountered a scenario where a particular U-SQL job failed with the following error message:

Usql – Job failed due to internal system error – NM_CANNOT_LAUNCH_JM

A bit of research on Google revealed that it's a system error, one that doesn't leave a lot of diagnostic clues to reason about. Retrying the job manually (by clicking the button in the portal) succeeded! That makes it a bit unpredictable and uncertain. However, uncertainty like this is the norm when developing software for the cloud. We have all read or heard about Netflix's Chaos Monkey.

What is resiliency?

Resiliency is the capability to handle partial failures while continuing to execute and not crash. In modern application architectures — whether it be micro services running in containers on-premises or applications running in the cloud — failures are going to occur. For example, applications that communicate over networks (like services talking to a database or an API) are subject to transient failures. These temporary faults cause lesser amounts of downtime due to timeouts, overloaded resources, networking hiccups, and other problems that come and go and are hard to reproduce. These failures are usually self-correcting. (Source)

Today I will present an approach that mitigates these abrupt job failures.

The Solution Design

Basically, I wanted a job progress watcher that looks for failed jobs and resubmits them as retry logic. I also don't want to retry more than once, which has the potential to create a forever-failing loop. I can have my watcher run at a regular frequency, every few minutes or every couple of hours.

Azure Functions

Azure Functions continues to impress me with its lightweight build and consumption-based pricing model. Functions can run with different triggers; among them is the timer (schedule) trigger, which perfectly fits my purpose.

Prerequisites

The function app needs to retrieve failed ADLA jobs and resubmit them as needed. This can be achieved with the Microsoft.Azure.Management.DataLake.Analytics (3.0.0) NuGet package. We will also need the Microsoft.Rest.ClientRuntime.Azure.Authentication (2.0.0) NuGet package for access token retrieval.

Configuration

We need a Service Principal to be able to interact with the ADLA instance on Azure. Managed Service Identity (which I have written about before) could also be used to make this secretless; however, in this example I will use a Service Principal to keep it easier to understand. Once we have our Service Principal, we need to configure its details in the Function App's Application Settings.
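The setting names below match the ones read by the function code that follows. For local development they could live in local.settings.json (all values here are placeholders), while in Azure they go into the Function App's Application Settings:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "ADLA_NAME": "<your-adla-account-name>",
    "TENANT_ID": "<azure-ad-tenant-id>",
    "SERVICE_PRINCIPAL_ID": "<app-registration-client-id>",
    "SERVICE_PRINCIPAL_SECRET": "<client-secret>"
  }
}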

Hacking the function

[FunctionName("FN_ADLA_Job_Retry")]
public static void Run([TimerTrigger("0 0 */2 * * *")]TimerInfo myTimer, TraceWriter log)
{
    var accountName = GetEnvironmentVariable("ADLA_NAME");
    var tenantId = GetEnvironmentVariable("TENANT_ID");
    var clientId = GetEnvironmentVariable("SERVICE_PRINCIPAL_ID");
    var clientSecret = GetEnvironmentVariable("SERVICE_PRINCIPAL_SECRET");

    ProcessFailedJobsAsync(tenantId, clientId, clientSecret, accountName).Wait();
}

That's our Azure Function, scheduled to run every 2 hours. When the trigger fires, we retrieve the AD tenant ID, the Service Principal ID and secret, and the name of the target ADLA account.
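GetEnvironmentVariable is just a small helper not shown in the snippet above; a minimal version could look like this:

private static string GetEnvironmentVariable(string name)
{
    // Application Settings (and local.settings.json values) surface as process environment variables
    return Environment.GetEnvironmentVariable(name, EnvironmentVariableTarget.Process);
}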

Next, we write a method that gives us an ADLA REST client, authenticated against Azure AD and ready to make calls to the ADLA account.

private static async Task<DataLakeAnalyticsJobManagementClient> GetAdlaClientAsync(
    string clientId, string clientSecret, string tenantId)
{
    var creds = new ClientCredential(clientId, clientSecret);
    var clientCreds = await ApplicationTokenProvider
        .LoginSilentAsync(tenantId, creds);

    var adlsClient = new DataLakeAnalyticsJobManagementClient(clientCreds);
    return adlsClient;
}

The DataLakeAnalyticsJobManagementClient class comes from the Microsoft.Azure.Management.DataLake.Analytics (3.0.0) NuGet package that we have already installed into our project.

Next, we write a method that gets us all the failed jobs:

private static async Task<Microsoft.Rest.Azure.IPage<JobInformationBasic>>
    GetFailedJobsAsync(string accountName, DataLakeAnalyticsJobManagementClient client)
{
    // We are ignoring the data pages that have older jobs.
    // If that's important to you, follow the next page link (ListNextAsync) to retrieve those pages.
    return await client.Job
        .ListAsync(accountName,
            new ODataQuery<JobInformationBasic>(job => job.Result == JobResult.Failed));
}

We now have the capability to retrieve failed jobs. Great! Now we write the real logic that checks for failed jobs that have never been retried and resubmits them.

private const string RetryJobPrefix = "RETRY-";

public static async Task ProcessFailedJobsAsync(
    string tenantId, string clientId, string clientSecret, string accountName)
{
    var client = await GetAdlaClientAsync(clientId, clientSecret, tenantId);

    var failedJobs = await GetFailedJobsAsync(accountName, client);

    foreach (var failedJob in failedJobs)
    {
        // If it's a retry attempt we will not kick this off again.
        if (failedJob.Name.StartsWith(RetryJobPrefix)) continue;

        // We will retry this with a name prefixed with RETRY.
        var retryJobName = $"{RetryJobPrefix}{failedJob.Name}";

        // Before we kick this off again, let's check whether we have already retried it before.
        if (!(await HasRetriedBeforeAsync(accountName, client, retryJobName)))
        {
            var jobDetails = await client.Job.GetAsync(accountName, failedJob.JobId.Value);
            var newJobID = Guid.NewGuid();

            var properties = new USqlJobProperties(jobDetails.Properties.Script);
            var parameters = new JobInformation(
                retryJobName,
                JobType.USql, properties,
                priority: failedJob.Priority,
                degreeOfParallelism: failedJob.DegreeOfParallelism,
                jobId: newJobID);

            // Resubmit this job now.
            await client.Job.CreateAsync(accountName, newJobID, parameters);
        }
    }
}

private static async Task<bool> HasRetriedBeforeAsync(string accountName,
    DataLakeAnalyticsJobManagementClient client, string name)
{
    var jobs = await client.Job
        .ListAsync(accountName,
            new ODataQuery<JobInformationBasic>(job => job.Name == name));

    return jobs.Any();
}

That's all there is to it!

Final thoughts!

We can't avoid failures, but we can respond in ways that keep our systems up, or at least minimize downtime. In this example, one job failing unpredictably could ripple out and cause the wider system to fail.

We should build our own mitigation against these uncertain factors – with automation.

Deploying an Azure WebJob written in .NET Core

Lately I wrote a .NET Core web job and wanted to publish it via CD (continuous deployment) from Visual Studio Online. I soon figured out that the Azure WebJobs SDK doesn't support .NET Core yet. The work I expected to take 10 minutes took about an hour.

If you are trying to figure out the same thing, this blog post is what you are looking for.

I will describe the steps and provide a PowerShell script that does the deployment via the Kudu API. Kudu is the engine behind source-control-based deployments for Azure App Service, and it has a zip API that allows us to deploy a zipped folder into an Azure app service.

Here are the steps you need to follow. You can start by creating a simple .NET Core console application. Add a PowerShell file to the project that will do the deployment in your Visual Studio Online release pipeline. The PowerShell script will do the following:

  • Publish the project (using dotnet publish)
  • Make a zip out of the artifacts
  • Deploy the zip into the Azure web app

Publishing the project

We will use dotnet publish command to publish our project.

$resourceGroupName = "my-resource-group"
$webAppName = "my-web-job"
$projectName = "WebJob"
$outputRoot = "webjobpublish"
$outputFolderPath = "webjobpublish\App_Data\Jobs\Continuous\my-web-job"
$zipName = "publishwebjob.zip"

$projectFolder = Join-Path `
    -Path "$((get-item $PSScriptRoot ).FullName)" `
    -ChildPath $projectName
$outputFolder = Join-Path `
    -Path "$((get-item $PSScriptRoot ).FullName)" `
    -ChildPath $outputFolderPath
$outputFolderTopDir = Join-Path `
    -Path "$((get-item $PSScriptRoot ).FullName)" `
    -ChildPath $outputRoot
$zipPath = Join-Path `
    -Path "$((get-item $PSScriptRoot ).FullName)" `
    -ChildPath $zipName

if (Test-Path $outputFolder)
  { Remove-Item $outputFolder -Recurse -Force; }
if (Test-Path $zipPath) { Remove-Item $zipPath -Force }
$fullProjectPath = "$projectFolder\$projectName.csproj"

dotnet publish "$fullProjectPath" `
    --configuration release --output $outputFolder

Create a compressed artifact folder

We will use System.IO.Compression.Filesystem assembly to create the zip file.

Add-Type -assembly "System.IO.Compression.Filesystem"
[IO.Compression.Zipfile]::CreateFromDirectory(
        $outputFolderTopDir, $zipPath)

Upload the zip into Azure web app

Next step is to upload the zip file into the Azure web app. This is where we first need to fetch the credentials for the Azure web app and then use the Kudu API to upload the content. Here’s the script:

function Get-PublishingProfileCredentials
         ($resourceGroupName, $webAppName) {

    $resourceType = "Microsoft.Web/sites/config"
    $resourceName = "$webAppName/publishingcredentials"

    $publishingCredentials = Invoke-AzureRmResourceAction `
                 -ResourceGroupName $resourceGroupName `
                 -ResourceType $resourceType `
                 -ResourceName $resourceName `
                 -Action list `
                 -ApiVersion 2015-08-01 `
                 -Force
    return $publishingCredentials
}

function Get-KuduApiAuthorisationHeaderValue
         ($resourceGroupName, $webAppName) {

    $publishingCredentials =
      Get-PublishingProfileCredentials $resourceGroupName $webAppName

    return ("Basic {0}" -f `
        [Convert]::ToBase64String( `
        [Text.Encoding]::ASCII.GetBytes(("{0}:{1}"
           -f $publishingCredentials.Properties.PublishingUserName, `
        $publishingCredentials.Properties.PublishingPassword))))
}

$kuduHeader = Get-KuduApiAuthorisationHeaderValue `
    -resourceGroupName $resourceGroupName `
    -webAppName $webAppName

$Headers = @{
    Authorization = $kuduHeader
}

# use kudu deploy from zip file
Invoke-WebRequest `
    -Uri "https://$webAppName.scm.azurewebsites.net/api/zipdeploy" `
    -Headers $Headers `
    -InFile $zipPath `
    -ContentType "multipart/form-data" `
    -Method Post

# Clean up the artifacts now
if (Test-Path $outputFolder)
      { Remove-Item $outputFolder -Recurse -Force; }
if (Test-Path $zipPath) { Remove-Item $zipPath -Force }

PowerShell task in Visual Studio Online

Now we can leverage the Azure PowerShell task in the Visual Studio Online release pipeline and invoke the script to deploy the web job.

That’s it!

Thanks for reading, and have a nice day!

CQRS and ES on Azure Table Storage

Lately I have been playing with the Event Sourcing and Command Query Responsibility Segregation (CQRS) patterns on Azure Table Storage, and I thought of creating a lightweight library that facilitates writing such applications. I ended up with a NuGet package; here is the GitHub repository.

A lightweight CQRS supporting library with Event Store based on Azure Table Storage.

Quick start guide

Install

Install the SuperNova.Storage NuGet package into the project.

Install-Package SuperNova.Storage -Version 1.0.0

The dependencies of the package are:

  • .NETCoreApp 2.0
  • Microsoft.Azure.DocumentDB.Core (>= 1.7.1)
  • Microsoft.Extensions.Logging.Debug (>= 2.0.0)
  • SuperNova.Shared (>= 1.0.0)
  • WindowsAzure.Storage (>= 8.5.0)

Implementation guide

Write Side – Event Sourcing

Once the package is installed, we can start sourcing events in an application. For example, let's start with a canonical UsersController in a Web API project.

We can use dependency injection to make the EventStore available in our controller.

Here's an example where we register an instance of the Event Store with the DI framework in our Startup.cs:

// The config object encapsulates the table storage connection string
services.AddSingleton<IEventStore>(new EventStore( /* ... provide config ... */ ));

Now the controller:

[Produces("application/json")]
[Route("users")]
public class UsersController : Controller
{
    private readonly IEventStore eventStore;

    public UsersController(IEventStore eventStore)
    {
        this.eventStore = eventStore; // Capture the event store handle
    }

    // ... other methods skipped here
}

Aggregate

Implementing event sourcing becomes much handier when it's fostered with Domain-Driven Design (DDD). I am going to assume that we are familiar with DDD concepts (especially Aggregate Roots).

An aggregate is our consistency boundary (read: transactional boundary) in Event Sourcing. (Technically, aggregate IDs are our partition keys in the Event Store table; therefore, we can only apply an atomic operation at the level of a single aggregate root.)

Let’s create an Aggregate for our User domain entity:

using SuperNova.Shared.Messaging.Events.Users;
using SuperNova.Shared.Supports;

public class UserAggregate : AggregateRoot
{
    private string _userName;
    private string _emailAddress;
    private Guid _userId;
    private bool _blocked;

Once we have the aggregate class written, we should come up with the events that are relevant to this aggregate. We can use Event Storming to identify them.
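For our example the events are UserRegistered, UserBlocked and UserNameChanged. Their actual definitions live in the SuperNova.Shared package; as a rough sketch (their exact shape, and any base event type they derive from, is an assumption here), they are simple DTOs carrying the aggregate ID and the changed data:

public class UserRegistered
{
    public Guid AggregateId { get; set; }
    public string UserName { get; set; }
    public string Email { get; set; }
}

public class UserBlocked
{
    public Guid AggregateId { get; set; }
}

public class UserNameChanged
{
    public Guid AggregateId { get; set; }
    public string NewName { get; set; }
}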

Here are the Apply handlers for the events we will use in our example scenario:

public class UserAggregate : AggregateRoot
{
    // ... other code skipped

    #region Apply events
    private void Apply(UserRegistered e)
    {
        this._userId = e.AggregateId;
        this._userName = e.UserName;
        this._emailAddress = e.Email;
    }

    private void Apply(UserBlocked e)
    {
        this._blocked = true;
    }

    private void Apply(UserNameChanged e)
    {
        this._userName = e.NewName;
    }
    #endregion

    // ... other code skipped
}

Now that we have our business events defined, we will define our commands for the aggregate:

public class UserAggregate : AggregateRoot
{
    #region Accept commands
    public void RegisterNew(string userName, string emailAddress)
    {
        Ensure.ArgumentNotNullOrWhiteSpace(userName, nameof(userName));
        Ensure.ArgumentNotNullOrWhiteSpace(emailAddress, nameof(emailAddress));

        ApplyChange(new UserRegistered
        {
            AggregateId = Guid.NewGuid(),
            Email = emailAddress,
            UserName = userName
        });
    }

    public void BlockUser(Guid userId)
    {
        ApplyChange(new UserBlocked
        {
            AggregateId = userId
        });
    }

    public void RenameUser(Guid userId, string name)
    {
        Ensure.ArgumentNotNullOrWhiteSpace(name, nameof(name));

        ApplyChange(new UserNameChanged
        {
            AggregateId = userId,
            NewName = name
        });
    }
    #endregion

    // ... other code skipped
}

So far so good!

Now we will modify the Web API controller to send the correct commands to the aggregate.

public class UserPayload
{
    public string UserName { get; set; }
    public string Email { get; set; }
}

// POST: User
[HttpPost]
public async Task<IActionResult> Post(Guid projectId, [FromBody]UserPayload user)
{
    Ensure.ArgumentNotNull(user, nameof(user));

    var userId = Guid.NewGuid();

    await eventStore.ExecuteNewAsync(
        Tenant, "user_event_stream", userId, async () =>
        {
            var aggregate = new UserAggregate();

            aggregate.RegisterNew(user.UserName, user.Email);

            return await Task.FromResult(aggregate);
        });

    return new JsonResult(new { id = userId });
}

And another API to modify existing users in the system:

// PUT: User
[HttpPut("{userId}")]
public async Task<IActionResult> Put(Guid projectId, Guid userId, [FromBody]string name)
{
    Ensure.ArgumentNotNullOrWhiteSpace(name, nameof(name));

    await eventStore.ExecuteEditAsync(
        Tenant, "user_event_stream", userId,
        async (aggregate) =>
        {
            aggregate.RenameUser(userId, name);

            await Task.CompletedTask;
        }).ConfigureAwait(false);

    return new JsonResult(new { id = userId });
}

That's it! We have our WRITE side completed. The event store now contains the events for the user event stream.

[Screenshot: events persisted in the Azure Table Storage event store]

Read Side – Materialized Views

We can consume the events in a separate console worker process and generate the materialized views for the READ side.

The readers (the console application, an Azure WebJob worker, for instance) behave like feed processors and have their own lease collection, which makes them fault tolerant and resilient. If a reader crashes, it catches up from the last event version that was materialized successfully. It uses polling, rather than a message broker (Service Bus, for instance), on purpose, to speed things up and avoid latencies during event propagation. Scalability is ensured by dedicating a lease per tenant and event stream.

How to listen for events?

In a worker application (typically a console application) we will listen for events:

private static async Task Run()
{
    var eventConsumer = new EventStreamConsumer(
        // ... skipped for simplicity
        "user-event-stream",
        "user-event-stream-lease");

    await eventConsumer.RunAndBlock((evts) =>
    {
        foreach (var evt in evts)
        {
            if (evt is UserRegistered userAddedEvent)
            {
                // readModel is the materialized-view writer (skipped here for simplicity)
                readModel.AddUserAsync(new UserDto
                {
                    UserId = userAddedEvent.AggregateId,
                    Name = userAddedEvent.UserName,
                    Email = userAddedEvent.Email
                }, evt.Version);
            }
            else if (evt is UserNameChanged userChangedEvent)
            {
                readModel.UpdateUserAsync(new UserDto
                {
                    UserId = userChangedEvent.AggregateId,
                    Name = userChangedEvent.NewName
                }, evt.Version);
            }
        }

    }, CancellationToken.None);
}

static void Main(string[] args)
{
    Run().Wait();
}

Now we have a document collection (we are using Cosmos DB's DocumentDB API in this example for materialization, but it could be any database, essentially) that is updated as we store events in the event stream.
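The readModel used in the worker above is whatever write-through layer sits in front of that collection. Its shape is not part of the library; a minimal sketch (the interface name and method signatures here are assumptions for illustration) might be:

public interface IUserReadModel
{
    // Insert the user document, recording the event version it was built from
    Task AddUserAsync(UserDto user, int version);

    // Update the user document when a newer event version arrives
    Task UpdateUserAsync(UserDto user, int version);
}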

Conclusion

The library is very lightweight and heavily influenced by Greg Young's event store and aggregate models. Feel free to use it or contribute.

Thank you!

Prompt for Save changes in MMC 3.0 Application (C#)

Microsoft Management Console (MMC) 3.0 is a managed platform for writing and hosting applications inside the standard Windows configuration console. It provides a simple, consistent and integrated management user interface and administrative console. One of the products I am currently working on uses the MMC SDK; we used it to develop the configuration panel for administrative purposes.

That's the quick background for this post. However, this post is not meant for someone who has never worked with the MMC SDK; I am assuming you already know the basics.

Our application has quite a number of ScopeNodes, and each of them has an associated details page (in MMC terminology these are Views; in our case most of them are FormViews). All of them render application data in a WinForms UserControl, and we allow users to modify that configuration data in place.
The problem begins when the user moves to a different scope node and then closes the MMC console window: the changes the user made earlier are not saved. As you may know, the MMC console is an MDI application and the scope nodes are completely isolated from each other, so you can't simply prompt the user to save or discard the pending changes.

I googled a lot for a solution, but ended up with a myriad of frustrations. Many people have faced the same problem and ended up implementing a pop-up dialog, so that they could handle the full lifecycle of the save functionality themselves. But that means a lot of development work when you have many scope nodes: you need to define two forms for each of them, one for the read-only display and another as a pop-up to edit the data. For my product that was not really viable. In fact, it even looks nasty to get a pop-up for each node's configuration.
Anyway, I finally resolved the problem myself. It's not the most elegant way to settle the issue, but it works superbly for my purpose. Here is what I did: I wrote a class that intercepts the Windows messages and takes action when the user tries to close the main console.



using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Windows.Forms;

/// Subclassing the main window handle's WndProc method to intercept
/// the close event
internal class SubClassHWND : NativeWindow
{
private const int WM_CLOSE = 0x10;
private MySnapIn _snapIn; // MySnapIn class is derived from SnapIn (MMC SDK)
private List<SubClassHWND> _childNativeWindows;

// Constructs a new instance of this class
internal SubClassHWND(MySnapIn snapIn)
{
this._snapIn = snapIn;
this._childNativeWindows = new List<SubClassHWND>();
}

// Starts the hook process
internal void StartHook()
{
// get the handle
var handle = Process.GetCurrentProcess().MainWindowHandle;

if (handle != IntPtr.Zero)
{
// assign it now
this.AssignHandle(handle);
// get the childrens
foreach (var childHandle in GetChildWindows(handle))
{
var childSubClass = new SubClassHWND(this._snapIn);

// assign this
childSubClass.AssignHandle(childHandle);

// keep the instance alive
_childNativeWindows.Add(childSubClass);
}
}
}

// The overriden windows procedure
protected override void WndProc(ref Message m)
{
if (_snapIn != null && m.Msg == WM_CLOSE)
{ // if we have a valid snapin instance
if (!_snapIn.CanCloseSnapIn(this.Handle))
{ // if we can close
return; // don't close this then
}
}
// delegate the message to the chain
base.WndProc(ref m);
}

// Requests the handle to close the window
internal static void RequestClose(IntPtr hwnd)
{
SendMessage(hwnd, WM_CLOSE, IntPtr.Zero, IntPtr.Zero);
}

// Send a Windows message to a window handle
[DllImport("user32.dll")]
public static extern IntPtr SendMessage(IntPtr hWnd,
int Msg,
IntPtr wParam,
IntPtr lParam);


[DllImport("user32")]
[return: MarshalAs(UnmanagedType.Bool)]
public static extern bool EnumChildWindows(IntPtr window, EnumWindowProc callback, IntPtr i);

public static List<IntPtr> GetChildWindows(IntPtr parent)
{
List<IntPtr> result = new List<IntPtr>();
GCHandle listHandle = GCHandle.Alloc(result);
try
{
EnumWindowProc childProc = new EnumWindowProc(EnumWindow);
EnumChildWindows(parent, childProc, GCHandle.ToIntPtr(listHandle));
}
finally
{
if (listHandle.IsAllocated)
listHandle.Free();
}
return result;
}

private static bool EnumWindow(IntPtr handle, IntPtr pointer)
{
GCHandle gch = GCHandle.FromIntPtr(pointer);
List<IntPtr> list = gch.Target as List<IntPtr>;
if (list == null)
{
throw new InvalidCastException("GCHandle Target could not be cast as List<IntPtr>");
}
list.Add(handle);
// You can modify this to check to see if you want to cancel the operation, then return a null here
return true;
}

public delegate bool EnumWindowProc(IntPtr hWnd, IntPtr parameter);
}

This class has the WndProc method, the Windows message pump (or dispatcher) method, which receives all the messages sent to the MMC console's main window. The class also sets a message hook on all the child windows hosted inside the MMC MDI window.

Now we only need to invoke the StartHook method from the SnapIn to activate this interception hook.



// The subclassed SnapIn for my application
public class MySnapIn : SnapIn
{
internal SubClassHWND SubClassHWND
{
get;
private set;
}

protected MySnapIn()
{
// create the subclassing support
SubClassHWND = new SubClassHWND(this);
}

// Start the hook now
protected override void OnInitialize()
{
SubClassHWND.StartHook();
}

Now we have the option to do something before the application closes, like prompting with a Yes/No/Cancel dialog the way Notepad does for a dirty file.



// Determines if the snap-in can be closed now or not
internal bool CanCloseSnapIn(IntPtr requestWindow)
{
if (IsDirty)
{ // found a node dirty, ask user if we can
// close this dialog or not
this.BeginInvoke(new System.Action(() =>
{
using (var dlg = new SnapInCloseWarningDialog())
{
var dlgRes = Console.ShowDialog(dlg);

switch (dlgRes)
{
case DialogResult.Yes:
SaveDirtyData(); // save them here
IsDirty = false; // set to false, so next
// time the method
// will not prevent
// closing the application
SubClassHWND.RequestClose(requestWindow);
break;
case DialogResult.No:
IsDirty = false;
SubClassHWND.RequestClose(requestWindow);
break;
case DialogResult.Cancel: break;// Do nothing
}
}
}));
return false;
}
return true;
}
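IsDirty and SaveDirtyData used above are application-specific members of MySnapIn, not part of the MMC SDK. A hypothetical sketch of what they could look like (your FormViews would set the flag whenever the user edits data in a UserControl):

// Hypothetical members on MySnapIn - the real dirty tracking is application specific
internal bool IsDirty { get; set; }

internal void SaveDirtyData()
{
    // Walk the scope nodes / form views and persist any pending edits here
}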


One small problem remains, though. The dispatcher method receives WM_CLOSE on a thread that can't display a window, because the current thread is not really a GUI thread. So we have to resort to a small trick: we display the prompt through a delegate (using BeginInvoke) and discard the WM_CLOSE message we already intercepted.
Later, when a choice has been made: if the user selected 'Yes', we close the application after saving the data; if 'No', we close the snap-in without saving; only for 'Cancel' do we do nothing. So the only critical part is how we close the window again. Here is how we do that:

Notice that the snap-in's CanCloseSnapIn method takes a parameter: the handle (an IntPtr) of the window the user tried to close. This was done on purpose: it gives us the ability to send a WM_CLOSE back to that same window. So even if the user closes an MDI child, only that child window gets closed, and only after saving, which is just perfect!

Hope this will help somebody struggling with the same gotcha.