Azure App Service with Front Door – how to fix outbound URLs? This article shows how to rewrite outbound IIS URLs with the URL Rewrite module, so that legacy web apps hosted on Azure App Service work correctly behind a WAF (Front Door/Application Gateway). Setting up Azure Front Door or Azure Application Gateway is a fairly straightforward process and is well documented in the Microsoft Azure docs; that is beyond the scope of this repository. However, when you deploy a legacy application that uses either OAuth 2.0/OpenID Connect based authentication or OWIN authentication middleware, you often run into a broken AuthN flow, because the application might not honor the front-door/gateway host headers and produces a redirect (HTTP 302) to a path constructed from the App Service URI – as opposed to the front-door/gateway URI.

One can of course fix this by changing the code – but that is sometimes not practical or possible. An alternative approach (described below) is to catch one or more specific outbound URIs (the redirect flows) in a web.config file and use the IIS URL Rewrite module to rewrite them accordingly. That requires changes in web.config but not in the source code, which is a rather cleaner approach, as you can do it even from the source control management (Kudu) site.

Why Outbound rules in URL rewrite?

If we want to change the response headers or content after processing has been completed by the specific handler or execution engine, but before the response is sent to the client, we can use outbound rules.

How can we do this?

Here’s an example web.config fragment that shows an outbound rule to alter HTTP headers in outbound responses.

      <!-- Creating rewrite rules -->
      <rewrite>
        <outboundRules>
          <!-- The below rule captures a 302 (redirect) response
               whose 'Location' response header contains
               an outbound URL (coming from the web app)
               that has 'signin-oidc' in the path.
               When 'signin-oidc' is present in the path, it
               will match the regular expression and rewrite the Location
               header with the hostname that comes from
               your front-door/application gateway URL. The notation {R:2}
               preserves any following query parameters or sub-path that
               were present in the original URL. Replace
               'your-frontdoor-host.example.com' with your own hostname. -->
          <preConditions>
            <preCondition name="IsRedirect">
              <add input="{RESPONSE_STATUS}" pattern="302" />
            </preCondition>
          </preConditions>
          <rule name="changeURI" preCondition="IsRedirect" enabled="true">
            <match serverVariable="RESPONSE_Location"
                   pattern="^(.+)://[^/]+/(signin-oidc.*)"
                   ignoreCase="true" />
            <action type="Rewrite"
                    value="https://your-frontdoor-host.example.com/{R:2}" />
          </rule>
        </outboundRules>
      </rewrite>


The above rule captures a 302 (redirect) response whose Location response header contains an outbound URL (coming from the web app) that has signin-oidc in the path. When signin-oidc is present in the path, the regular expression matches and the Location header is rewritten with the hostname that comes from your front-door/application gateway URL. The notation {R:2} preserves any following query parameters or sub-path that were present in the original URL.

In order to understand the {R:2} syntax in depth, please read about back-references in the Microsoft documentation.

The important bit from the document is quoted below:

Usage of back-references is the same regardless of which pattern syntax was used to capture them. Back-references can be used in the following locations within rewrite rules:

  • In condition input strings
  • In rule actions, specifically:
    • url attribute of Rewrite and Redirect action
    • statusLine and responseLine of a CustomResponse action
  • In a key parameter to the rewrite map

Back-references to condition patterns are identified by {C:N} where N is from 0 to 9. Back-references to rule patterns are identified by {R:N} where N is from 0 to 9. Note that for both types of back-references, {R:0} and {C:0}, will contain the matched string.

For example, in this pattern:

    ^(www\.)(.*)$

For the string www.foo.com the back-references will be indexed as follows:

    {C:0} - www.foo.com
    {C:1} - www.
    {C:2} - foo.com

Within a rule action, you can use the back-references to the rule pattern and to the last matched condition of that rule. Within a condition input string, you can use the back-references to the rule pattern and to the previously matched condition.

The following rule example demonstrates how back-references are created and referenced:

<rule name="Rewrite subdomain">
 <match url="^(.+)" /> <!-- rule back-reference is captured here -->
 <conditions>
  <!-- condition back-reference is captured here -->
  <add input="{HTTP_HOST}" matchType="Pattern" pattern="^([^.]+)\.mysite\.com$" />
 </conditions>
 <!-- rewrite action uses back-references to condition and
      to rule when rewriting the url -->
 <action type="Rewrite" url="{C:1}/{R:1}" />
</rule>
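The back-references in rules like these can be sanity-checked outside IIS with any ordinary regex engine. Here is a small sketch (Python purely for illustration; the pattern mirrors the Location-header rule from earlier, and the sample URL is made up):

```python
import re

# Pattern in the spirit of the outbound rule: group 1 is the scheme,
# group 2 is everything from 'signin-oidc' onwards ({R:2} in IIS terms).
pattern = re.compile(r"^(https?)://[^/]+/(signin-oidc.*)", re.IGNORECASE)

m = pattern.match("https://myapp.azurewebsites.net/signin-oidc?code=abc")
print(m.group(0))  # {R:0} - the whole matched string
print(m.group(1))  # {R:1} - the scheme (https)
print(m.group(2))  # {R:2} - signin-oidc?code=abc
```

Group 0 corresponds to {R:0} (the whole match), exactly as the quoted documentation describes for rule back-references.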

How to create and test these patterns (with RegEx)?

Check out the Microsoft documentation on how to use the Test pattern tool that ships with the IIS installation.


That’s about it. You can find an example web.config file (with complete configuration) in this GitHub repository.

Azure DevOps Security & Permissions REST API

Every few months I notice the following saga repeat. I face a challenge where I need to programmatically manage security aspects of Azure DevOps resources (like repositories, pipelines, environments etc.). I look up the Azure DevOps REST API documentation and realize that the Permissions & Security APIs are notoriously complicated and inadequately documented. So I press F12 to open the browser development tools and start intercepting HTTP requests, trying to figure out what payloads are exchanged so I can construct the appropriate HTTP requests myself. However strange it might sound, this method usually works for me (it has actually worked almost every time), but it’s a painful and time-consuming process. Recently I had to go through this process one more time, and I promised myself that once I was done, I would write a blog post about it and put the code in a GitHub repository – so next time I will save myself some time and pain. That’s exactly what this post is all about.

Security & Permission REST API

As I have said, the security REST API is complicated and inadequately documented. Typically, each family of resources (work items, Git repositories, etc.) is secured using a different namespace. The first challenge is to find out the namespace IDs.

Then each security namespace contains zero or more access control lists (aka. ACLs). Each access control list contains a token, an inherit flag and a set of zero or more access control entries. Each access control entry contains an identity descriptor, an allowed permissions bitmask, and a denied permissions bitmask.
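The ACL/ACE structure described above can be sketched in a few lines (a conceptual model in Python; the field names are mine, and the real REST payloads are JSON objects). The key rule worth encoding is that a denied bit always wins over an allowed bit:

```python
from dataclasses import dataclass

# Permission bits from the 'Git Repositories' namespace (values as
# published in the Azure DevOps namespace reference; verify in your org).
GENERIC_READ = 2
GENERIC_CONTRIBUTE = 4

@dataclass
class AccessControlEntry:
    descriptor: str  # identity descriptor
    allow: int = 0   # allowed permissions bitmask
    deny: int = 0    # denied permissions bitmask

def effective_mask(ace: AccessControlEntry) -> int:
    # Deny trumps allow when Azure DevOps evaluates an ACE.
    return ace.allow & ~ace.deny

ace = AccessControlEntry(
    descriptor="Microsoft.TeamFoundation.Identity;S-1-9-...",
    allow=GENERIC_READ | GENERIC_CONTRIBUTE,
    deny=GENERIC_CONTRIBUTE)

print(bool(effective_mask(ace) & GENERIC_READ))        # read survives
print(bool(effective_mask(ace) & GENERIC_CONTRIBUTE))  # contribute is denied
```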

Tokens are arbitrary strings representing resources in Azure DevOps. The token format differs per resource type; however, the hierarchy and separator characters are common to all tokens. Now, where do you find these token formats? Well, I mostly find them by intercepting the browser HTTP payloads. To save myself future effort, I have created a .NET object model around the security namespace IDs, permissions and tokens – so when I consume those libraries, I can ignore these lower-level elements and use higher-order APIs to manage permissions. You can investigate the GitHub repository to learn about it. However, just to make it more fun to use, I have spent a bit of time creating a manifest file format (yes, stolen from the Kubernetes world) so I can get my future jobs done just by writing YAML files – as opposed to .NET/C# code.
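For instance, the namespace IDs mentioned earlier can be discovered with a single GET against the security namespaces endpoint. A minimal sketch (Python standard library only; the organization name and personal access token are placeholders you must supply, so the live call is left commented out):

```python
import base64
import json
import urllib.request

def namespaces_url(organization: str, api_version: str = "6.0") -> str:
    # Lists every security namespace (namespaceId, name, permission bits).
    return (f"https://dev.azure.com/{organization}"
            f"/_apis/securitynamespaces?api-version={api_version}")

def fetch_namespaces(organization: str, pat: str):
    # Azure DevOps REST APIs accept a PAT via Basic auth (empty username).
    token = base64.b64encode(f":{pat}".encode()).decode()
    req = urllib.request.Request(
        namespaces_url(organization),
        headers={"Authorization": f"Basic {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["value"]

print(namespaces_url("my-org"))
# for ns in fetch_namespaces("my-org", "<personal-access-token>"):
#     print(ns["namespaceId"], ns["name"])
```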

Instructions to use

The repository contains two projects (one is a library that produces a DLL, and the other is a console application); the console executable is named azdoctl.exe.

The idea is to create a manifest file (in YAML format) and apply the changes via azdoctl.exe:

> azdoctl apply -f manifest.yaml

Manifest file

You need to create a manifest file that describes your Azure DevOps project and permissions. The manifest file format is YAML (the idea is borrowed from Kubernetes manifest files).


Here’s the schema of the manifest file:

apiVersion: apps/v1
kind: Project
metadata:
  name: Bi-Team-Project
  description: Project for BI Engineering team
template:
  name: Agile
  sourceControlType: Git

The manifest file starts with the team project name and description. Each manifest file can contain only one team project definition.


Next, we can define teams for the project with the following YAML block:

teams:
  - name: Bi-Core-Team
    description: The core team that run BI projects
    admins:
      - name: Timothy Green
        id: 4ae3c851-6ef3-4748-bef9-4f809736d538
      - name: Linda
        id: 9c5918c7-ef03-4059-a49e-aa6e6d761423
    membership:
      groups:
        - name: 'UX Specialists'
          id: a2931c86-e975-4220-aa89-dc3f952290f4
      users:
        - name: Timothy Green
          id: 4ae3c851-6ef3-4748-bef9-4f809736d538
        - name: Linda
          id: 9c5918c7-ef03-4059-a49e-aa6e6d761423

Here we can create teams and assign admins and members to them. All the references (names and IDs) must be valid in Azure Active Directory; the IDs are the Object IDs of the groups or users in Azure Active Directory.


Next, we can define the repositories that must be created and granted permissions:

repositories:
  - name: Sample-Git-Repository
    permissions:
      - group: 'Data-Scientists'
        origin: aad
        allowed:
          - GenericRead
          - GenericContribute
          - CreateBranch
          - PullRequestContribute
      - group: 'BI-Scrum-masters'
        origin: aad
        allowed:
          - GenericRead
          - GenericContribute
          - CreateBranch
          - PullRequestContribute
          - PolicyExempt

Again, you can grant an Azure AD group very fine-grained permissions on each repository that you want to create.

List of all the allowed permissions:



You can create environments and assign permissions to them with the following YAML block.

environments:
  - name: Development-Environment
    description: 'Deployment environment for Developers'
    permissions:
      - group: 'Bi-Developers'
        origin: aad
        roles:
          - Administrator
  - name: Production-Environment
    description: 'Deployment environment for Production'
    permissions:
      - group: 'Bi-Developers'
        origin: aad
        roles:
          - User

Build and Release (pipeline) folders

You can also create folders for build and release pipelines and apply specific permissions during bootstrap. That way, teams can have fine-grained permissions on these folders.

Build Pipeline Folders

Here’s the snippet for creating build folders:

buildFolders:
  - path: '/Bi-Application-Builds'
    permissions:
      - group: 'Bi-Developers'
        origin: aad
        allowed:
          - ViewBuilds
          - QueueBuilds
          - StopBuilds
          - ViewBuildDefinition
          - EditBuildDefinition
          - DeleteBuilds

And, for the release pipelines:

releaseFolders:
  - path: '/Bi-Application-Releases'
    permissions:
      - group: 'Bi-Developers'
        origin: aad
        allowed:
          - ViewReleaseDefinition
          - EditReleaseDefinition
          - ViewReleases
          - CreateReleases
          - EditReleaseEnvironment
          - DeleteReleaseEnvironment
          - ManageDeployments

Once you have the yaml file defined, you can apply it as described above.


That’s it for today. By the way,

The code is provided as-is, under the MIT license. You can use it, replicate it and modify it as much as you wish. I would appreciate it if you acknowledged its usefulness, but that’s not enforced; you are free to use it any way you want.

And that also means the author takes no responsibility and provides no guarantee of any kind.


Key Vault as backing store of Azure Functions

If you have used Azure Functions, you are probably aware that Azure Functions leverages a storage account underneath, both for file storage (the function app code resides in an Azure File share) and as a backing store for function keys (the secrets used in function invocations).


Figure: Storage Account containers – “azure-webjobs-secrets”

If you look inside the container, there are files with the following contents:


Figure: These JSON files have the function keys


Figure: Encrypted master keys and other function keys

I have been in a conversation where it was not appreciated to see the keys stored in the storage account. The security and governance team was looking for a better place to keep these keys – somewhere secrets could be further restricted from developer access.

Of course, we can create a VNET around the storage account and use a private link, but that has other consequences, as the content (the function implementation artifacts) is also stored in the storage account. Configuring two separate storage accounts can address this better; however, that makes the setup more complicated than it has to be.
A better option is to use a Key Vault as the backing store for these keys – which is a great feature of Azure Functions, but I’ve found few people are aware of it due to the lack of documentation. In this article I will show you how to move these secrets to a Key Vault.

To do so, we need to configure a few application settings on the Function App. They are given below:

  • AzureWebJobsSecretStorageType – keyvault
  • AzureWebJobsSecretStorageKeyVaultName – <Key Vault Name>
  • AzureWebJobsSecretStorageKeyVaultConnectionString – <Connection string, or leave it empty with managed identity configured on the Function>
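As a sketch, the rule in that list can be captured in a tiny helper (a hypothetical function of mine, not part of any SDK): the connection string stays empty when the Function's managed identity is used to reach the vault.

```python
def secret_storage_settings(vault_name: str, connection_string: str = "") -> dict:
    # App settings that switch the Functions secret store to Key Vault.
    # An empty connection string means the Function's managed identity
    # will be used to authenticate against the vault.
    return {
        "AzureWebJobsSecretStorageType": "keyvault",
        "AzureWebJobsSecretStorageKeyVaultName": vault_name,
        "AzureWebJobsSecretStorageKeyVaultConnectionString": connection_string,
    }

settings = secret_storage_settings("my-keyvault")
print(settings["AzureWebJobsSecretStorageType"])  # keyvault
```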

Once you have configured the above settings, you need to enable managed identity on your Azure Function. You can do that in the Identity section under the platform features tab. This is a much better option, in my opinion, as we don’t need to maintain any additional secrets to talk to Key Vault securely. Go ahead and turn the system identity toggle on. This creates a service principal with the same name as your Azure Function app.


Figure: Enabling system assigned managed identity on Function app
The next step is to add a rule to the Key Vault’s access policies for the service principal created in the earlier step.


Figure: Key vault Access policy
That’s it. Hit your function app now and you will see the keys stored inside the Key Vault. You can safely delete the container from the storage account now.


Figure: Secrets are stored in Key Vault

Hope this saves you some time when you are concerned about keeping the keys in a storage account.
The Azure Function sample is open source and lives on GitHub. You can have a look at the sources and find other interesting ideas to play with.

Continuously deploy Blazor SPA to Azure Storage static web site

Lately I have been learning Blazor – the relatively new UI framework from Microsoft. Blazor is just awesome: the ability to write C# code on both the server and the client side is extremely productive for .NET developers. From the Blazor documentation:

Blazor lets you build interactive web UIs using C# instead of JavaScript. Blazor apps are composed of reusable web UI components implemented using C#, HTML, and CSS. Both client and server code is written in C#, allowing you to share code and libraries.

I wanted to write a simple SPA (Single Page Application) and run it serverless. Azure Storage has offered static website hosting for quite a while now, which seems like a very nice option for running a Blazor SPA, since it executes in the user’s browser (within the same sandbox as JavaScript). It is also a cheap way to run a single page application in the cloud.

I am using GitHub as my source repository for free (in a private repository). Today I wanted to create a pipeline that continuously deploys my Blazor app to the storage account. Azure Pipelines has pretty nice integration with GitHub, and it has a free tier as well. Even if the GitHub repository or the pipeline is private, Azure Pipelines still provides a free tier: one free parallel job that can run for up to 60 minutes each time, until we’ve used 1800 minutes in a month. That’s pretty darn good for my use case.
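A quick back-of-the-envelope check of that free-tier math (illustrative arithmetic only):

```python
# Free tier: one parallel job, up to 60 minutes per run, 1800 minutes/month.
minutes_per_month = 1800
max_minutes_per_run = 60

# Even if every run hit the 60-minute cap, that's 30 runs a month.
print(minutes_per_month // max_minutes_per_run)  # 30

# A typical ~5-minute Blazor build/deploy would allow far more runs.
print(minutes_per_month // 5)  # 360
```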

I also wanted to build the project many times on my local machine (while developing) in the same way it gets built in the pipeline. Being a Docker fan myself, that’s quite a no-brainer. Let’s get started.


I performed a few steps before working on the pipeline – these are beyond the scope of this post.

  • I have created an Azure Subscription
  • Provisioned resource groups and storage account
  • I have created a Service Principal and granted it the Contributor role on the storage account

Publishing in Docker

I have created a Dockerfile that builds the app, runs the unit tests and, if all goes well, publishes the app to a folder. All of these are standard dotnet commands.
Once the application is published to a folder, I take the content of that folder into an Azure CLI base image (where the CLI is pre-installed) and throw away the rest of the intermediate containers.

Here’s our docker file:

# Build the app (adjust the SDK image tag to your project's target framework)
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
WORKDIR /src
COPY . .
WORKDIR "/src/BlazorApp"
RUN dotnet restore "BlazorApp.csproj"
RUN dotnet build "BlazorApp.csproj" -c Release -o /app
# Run the unit tests (assumes the test project lives in /src/BlazorAppTest)
WORKDIR "/src/BlazorAppTest"
RUN dotnet restore "BlazorAppTest.csproj"
RUN dotnet build "BlazorAppTest.csproj"
RUN dotnet test "BlazorAppTest.csproj"
# Publish the app
FROM build AS publish
WORKDIR "/src/BlazorApp"
RUN dotnet publish "BlazorApp.csproj" -c Release -o /app
FROM microsoft/azure-cli AS final
ARG AppID
ARG AppSecret
ARG TenantID
ARG StorageAccountName
COPY --from=publish /app .
WORKDIR "/app/BlazorApp/dist"
RUN az login --service-principal --username $AppID --password $AppSecret --tenant $TenantID
RUN az storage blob delete-batch --account-name $StorageAccountName --source "\$web"
RUN az storage blob upload-batch --account-name $StorageAccountName -s . -d "\$web"

The Dockerfile expects a few arguments (basically the service principal ID, the service principal’s password and the Azure AD tenant ID – these are required for the Azure CLI to sign in to my Azure subscription – plus the storage account name). Here’s how we can build this image now:

docker build \
  -f Blazor.Dockerfile \
  --build-arg StorageAccountName=<Storage-Account-Name> \
  --build-arg AppID=<APP GUID> \
  --build-arg AppSecret=<PASSWORD> \
  --build-arg TenantID=<Azure AD TENANT ID> \
  -t blazor-app .


Azure Pipeline as code

We now have the container; time to run it every time a commit is made to the GitHub repository. Azure Pipelines has a YAML format for defining pipeline-as-code, which is another neat feature of Azure Pipelines.

Let’s see what the pipeline-as-code looks like:

pool:
  name: Hosted Ubuntu 1604
steps:
- task: Docker@0
  displayName: 'Build and release Blazor to Storage Account'
  inputs:
    dockerFile: src/Blazor.Dockerfile
    buildArguments: |
      StorageAccountName=$(StorageAccountName)
      AppID=$(AppID)
      AppSecret=$(AppSecret)
      TenantID=$(TenantID)


I have committed this to the root folder of the same repository.

Creating the pipeline

We need to log in to Azure DevOps and create a project (if there is none). From the build option, we can create a new build definition.


The steps to create a build definition are very straightforward. It allows us to point directly to the GitHub repository that we want to build.
Almost there. We need to supply the service principal ID, password, tenant ID and storage account name to this pipeline, because both our Dockerfile and the pipeline-as-code expect them as dependencies. However, we can’t just put their values in and commit them to GitHub – they should be kept secret.

Azure Pipeline Secret variables

Azure Pipelines allows us to define secret variables for a pipeline. We need to open the build definition in “edit” mode and then go to the top-right ellipsis button, as below:


Now we can define the values of these secrets and keep them hidden (note the lock icon):


That’s all we need. We are ready to deploy the code to Azure Storage via this pipeline. If we now go and make a change in our repository, it will trigger the pipeline and, sure enough, it will build, test, publish and deploy to Azure Storage as a static SPA.

Thanks for reading!

It’s harder to read code than to write it

When I started writing code for commercial projects ten years back (around 2003), I learned an important lesson. I was assigned to write a function that serializes a data structure into an XML string and sends it in a SOAP body. I was very fast writing the module, which mostly used concatenation of BSTR objects in Visual C++ 6.0 to build the XML tags. But my mentor at that time was not happy when we reviewed that function. He told me to use the existing library functions (MSXML::IXMLDOMDocument2Ptr, DOMDocument6 etc.) to do the job. I had no clue at the time why he said so; I had never worked with MSXML before. It was easier for me to write it with BSTRs than to read the MSXML APIs for hours and go through all their hassles. I was really annoyed by this.

I am still not judging whether he was right or wrong, but one thing I learned from him (later, when I had spent more time in the profession) is that I should have learned what MSXML was capable of and how to use it. And had I done so, I might actually have used that library instead of writing a new one.

Writing new code sounds and looks easy, but it has an associated cost: the code has to be maintained, and more people need to be aware of it. This is especially true when it comes to rewriting something that has already been released. One may get away with rewriting something that was never shipped, but one should always think twice before rewriting something that is released. It may be nasty and hard to read, but it’s tested; bugs were found and fixed, and that knowledge is embedded in it. Rewriting often comes with a high chance of reintroducing a new set of bugs that will have to be found, fixed and maintained. Since that early lesson, I have been through many situations where rewriting felt like the easiest solution, the one that comes first to mind. But if the old code was released, I really push myself to reconsider whether I truly need a rewrite.

The issue sadly exists at a larger scale as well. When it comes to architecting a new solution, the same philosophy kicks in: it feels more comfortable to rewrite the solution entirely rather than assemble the existing product modules and bring them gradually onto the new platform. I think the culprit is the same in both cases. It’s the unwillingness to read and understand the existing product modules that drives us to think recreating the solution is the best way to go.

Fundamentally, I feel it’s an issue of reading vs. writing code. Summing up all the occasions on which I felt I should rewrite, I realize that in almost every instance I was reluctant to read the existing code, which led me to think of a rewrite. This may feel like the right thing to do when I think as an individual, as a programmer, but it’s dead wrong when I evaluate the decision from the organization’s perspective. It almost never gives a ROI.

I could never explain this better than Joel did in his blog:


“Netscape 6.0 is finally going into its first public beta. There never was a version 5.0. The last major release, version 4.0, was released almost three years ago. Three years is an awfully long time in the Internet world. During this time, Netscape sat by, helplessly, as their market share plummeted. It’s a bit smarmy of me to criticize them for waiting so long between releases. They didn’t do it on purpose, now, did they? Well, yes. They did. They did it by making the single worst strategic mistake that any software company can make: They decided to rewrite the code from scratch.”


Very recently I was in a circumstance where I could have rewritten the software (and honestly, I felt like doing so), because it uses socket IO and manual XML-based messaging on top of it. But I stopped myself from thinking in that direction, however catchy it looks. As Joel wrote:


We’re programmers. Programmers are, in their hearts, architects, and the first thing they want to do when they get to a site is to bulldoze the place flat and build something grand. We’re not excited by incremental renovation: tinkering, improving, planting flower beds.


Being in the architect position, I had the privilege of setting the direction. I had to convince a lot of people and stakeholders that we should not rewrite this from day one. Instead, we should take a pragmatic approach: seal the existing code into modules, interface those with more sophisticated technologies and gradually remove them from the stack. I am still unsure whether this direction will bring us success, but I am certain the chances are much higher than the other way around.

Custom SPGridView

Recently I had to create a custom grid control that allows users to group, sort, page and filter the data displayed in it. I spent a few days figuring out whether third-party controls (like Xceed, Infragistics etc.) could meet my requirements. Sadly, I found that these controls couldn’t fulfil what I wanted to do.

Well, my requirements were as follows:

1. The grid should be able to display a *really large* number of rows, so fetching every row from the database would simply kill the application.
2. The grid should do sorting at the database level, so the database indexes can be used for efficiency.
3. Filtering should also be performed by the database engine.
4. It should allow the user to group data in it; as you probably understand, the grouping should also be done by the database engine.
5. It should offer a look-and-feel similar to native SharePoint 2007 lists.

Now, when I tried to find a control (commercial or free) that offers these features, I had no other option but to be disappointed.

Almost all of the controls I evaluated offer all these features, but they ask for the entire DataTable in order to bind it – which is simply not possible for my purpose. Even if I had used the ObjectDataProvider, I still couldn’t do the grouping at the database end.

Interestingly, the SPGridView control (shipped by Microsoft) doesn’t allow multiple groupings either, and in grouped scenarios the other features – filtering, paging, sorting – don’t work properly.

Therefore, I set out to create a custom grid control that makes me happy – and eventually I did it. It’s not that difficult. I did a complete UI rendition inside the grid control and provided an interface that the client application implements to supply the data.

It’s working for me now. I am still doing some QC to make sure it’s decent enough; after that I’m going to publish the source of the control. But for now, have a look at the video where the grid control can be seen in action!

Stay tuned!

Parallel Extensions of .NET 4.0

Last night I was playing around with some cool new features of .NET Framework 4.0, a CTP of which has been released as a VPC and can be downloaded from here.

There is a lot of new stuff Microsoft plans to release with Visual Studio 10, and Parallel Extensions is one of them. Parallel Extensions is a set of APIs that lives under the System namespace inside the mscorlib assembly. Therefore, programmers do not need a reference to any other assembly to get the benefit of this cool feature; rather, they get it out of the box.

Well, what are Parallel Extensions all about?

Microsoft’s vision is that, for multithreaded applications, developers need to focus on many issues regarding managing threads, scalability and so on. Parallel Extensions is an attempt to let developers focus on the core business functionality rather than the thread management stuff. It provides some cool ways to manage concurrent applications. Parallel Extensions comprises three major areas. The first is the Task Parallel Library: there is a Task class that developers work with; they don’t bother with threads – they consider themselves to be writing tasks, and the framework executes those tasks in parallel. The next major area is called PLINQ, which is basically LINQ to Objects operating in parallel mode. And the third is coordination data structures.

Let’s look at a few code snippets to get a brief idea of this.

We will use a simple console application; as you can see in Solution Explorer, we are not using any assemblies beyond the defaults.

So parallel extensions do not require any special libraries!

The code above takes an input integer, doubles it, and finally finds the prime numbers from zero up to that value. This is nothing particularly useful, but it is enough to demonstrate the idea. The method also writes the executing thread ID to the console window.

Now, let’s first create a few threads to execute the method written above. We will create 10 threads to execute it simultaneously.

The thing to notice here is that this way we have the thread instances under our control, so we can invoke methods like Join(), Abort() etc.; but the developer is responsible for managing the threads on their own. The code produces the following output.

See – 10 different threads were actually created to execute this. Now, let’s use thread pool threads for the same job.

This generates output like the following.

Look – it is using the same thread (6) for all the work items; the .NET thread pool uses thread objects effectively. But this way we lose the control we had in the previous snippet: now we can’t cancel a certain thread directly, because we don’t actually know which thread is going to execute the work items.

Now, let’s have a look at the cool Parallel Extensions Task class. It’s pretty much like the Thread implementation and offers methods like Wait(), CancelAndWait() etc. to interact with the task. In addition, it takes advantage of the execution environment: if you run this application on a multi-core processor, it will spawn more threads to carry out the tasks. Since I am using a VPC with a single-core CPU, it uses one thread instance to carry out the tasks. But managing threads, or even thinking about them, is no longer my headache – all these concerns are taken care of by the parallel framework. Cool!

This generates the same output as the thread pool snippet, but only because I ran it in a VPC. On a multi-core machine, it will generate more threads to get optimal performance.

Parallel Static class

Well, this is even more interesting. It offers a few iterative methods that automatically execute each iteration as a task – in parallel, of course. Isn’t that cool?

I hope you liked it. I will explain PLINQ in my next post. Happy programming!

Extension Methods

.NET 3.5 (C# 3.0) provides a feature named “extension methods”, which is used extensively by the LINQ library. For example, the Enumerable class of the System.Linq namespace declares a whole bunch of static extension methods that allow users to write LINQ-enabled, smart-looking methods on any IEnumerable instance.
For instance

This generates output like the following

Here, we are using the Where method, which is basically an extension method for any IEnumerable instance. Extension methods, along with lambda expressions (another new feature of C# 3.0), allow us to write very expressive filter code like the snippet shown above.
So, what is an extension method?
According to MSDN,
Extension methods enable you to “add” methods to existing types without creating a new derived type, recompiling, or otherwise modifying the original type. Extension methods are a special kind of static method, but they are called as if they were instance methods on the extended type. For client code written in C# and Visual Basic, there is no apparent difference between calling an extension method and the methods that are actually defined in a type.
I personally like this feature very much. Along with the LINQ-related usage, extension methods can be very handy in some other cases.
Consider a scenario where I have an interface that has a method with three arguments.

Now, at some point, I find that it would be better to provide an overload of this method in which the last argument is absent, and the implementation of the interface passes true as the default value of indent.

Now, if I do so, each implementer of this interface needs to implement the handy overloaded version and provide the default value true. But this seems a burden that we could take away from the implementers. Also, there is a chance that somebody will implement this second method and mistakenly pass false as the default.
We can resolve this issue in a very neat way using an extension method. Consider the following snippet.
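A sketch of that approach (the type names such as IReportWriter and the indent parameter scenario are illustrative reconstructions, not the original snippet):

```csharp
using System;

public interface IReportWriter
{
    // The only method the interface declares; implementers never
    // see the overload or the default value.
    void Write(string name, string content, bool indent);
}

public static class ReportWriterExtensions
{
    // The "overload" lives here: one place owns the default value,
    // so no implementer can mistakenly pass false.
    public static void Write(this IReportWriter writer, string name, string content)
    {
        writer.Write(name, content, true);
    }
}

public class StringReportWriter : IReportWriter
{
    public string Last = "";

    public void Write(string name, string content, bool indent)
    {
        Last = (indent ? "    " : "") + name + ": " + content;
    }
}

public class ReportDemo
{
    public static void Main()
    {
        var writer = new StringReportWriter();
        writer.Write("title", "hello");   // two-argument call resolves to the extension method
        Console.WriteLine(writer.Last);
    }
}
```

Because the instance method requires three arguments, the two-argument call unambiguously binds to the extension method, which supplies the default.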

See, the interface contains only one version of the method, and implementers are not bothered at all with the overloaded version or the default-value jargon. But consumers of the interface can still call it as if it were part of the interface; they only need to import the namespace where the extension method is declared. Even Visual Studio provides IntelliSense support just like a regular overload scenario. Isn’t it nice?
Internally, what is happening? Well, this is basically syntactic sugar, nothing more. The compiler generates a regular static method call for each extension method invocation, so it interprets the syntax as follows.
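For instance (a sketch; the Where call is my own illustration of the desugaring):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class DesugarDemo
{
    public static void Main()
    {
        int[] numbers = { 1, 2, 3, 4 };

        // Extension-method syntax...
        IEnumerable<int> a = numbers.Where(n => n > 2);

        // ...is compiled to an ordinary static method call:
        IEnumerable<int> b = Enumerable.Where(numbers, n => n > 2);

        Console.WriteLine(string.Join(",", a) + " | " + string.Join(",", b));  // 3,4 | 3,4
    }
}
```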
So this is all compile-time work; at runtime, it is nothing different from a regular static method invocation.
Like C#, Visual Basic also supports extension methods. But there is a caveat: don’t use the extension method invocation syntax for any extension method written for the System.Object class. VB treats System.Object differently and will not generate the actual static method invocation at compile time; what will actually happen is that it raises an exception at runtime. So be aware of it.
This is really a great feature among the other new features of C# 3.0; we can now write common boiler-plate code as extension methods in an enterprise solution. For instance, helpers like ArgumentHelper.ThrowExceptionIfNull() or String.IsNullOrEmpty() can be written as extension methods and used in a very handy way.
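A hypothetical extension in that spirit (the ThrowIfNull name and signature are my own illustration, not from the original post):

```csharp
using System;

public static class ArgumentExtensions
{
    // Callers write value.ThrowIfNull("value") instead of
    // repeating null-check boilerplate everywhere. Note that an
    // extension method can run even on a null receiver, because
    // it is really a static call.
    public static T ThrowIfNull<T>(this T value, string paramName) where T : class
    {
        if (value == null)
            throw new ArgumentNullException(paramName);
        return value;
    }
}

public class ThrowIfNullDemo
{
    public static void Main()
    {
        string name = "sample";
        Console.WriteLine(name.ThrowIfNull("name"));  // prints "sample"
    }
}
```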

With great power comes great responsibility.

As this offers you a lot of power to write methods for any type, you need to make sure you are not writing unnecessary extension methods that could confuse others. For example, writing a lot of extension methods for System.Object is definitely not a good idea.
I’m expecting something called “extension properties”, which could be another good addition. I don’t think it would be difficult, because internally .NET properties are basically nothing but methods. I hope Microsoft will ship extension properties in a future version of the .NET Framework.
Happy programming!

How to remove SharePoint context menus selectively

I needed to figure out how I could selectively remove some of the standard SharePoint list context menu items. For example, most list context menus contain Edit Item, Delete Item, etc. Assume I have to keep the Delete menu but need to remove “Edit Item”. How can we do that?


Go to the page settings. Add a new Content Editor Web Part into the page and go to the settings of this web part. Open the source editor and put the following script in it.

function Custom_AddListMenuItems(m, ctx)
{
    var strDelete = "Delete this Item";
    var imgDelete = "";
    var strDeleteAction = "deleteThisSelectedListItem();";

    CAMOpt(m, strDelete, strDeleteAction, imgDelete);

    // add a separator to the menu
    CAMSep(m);

    // returning true suppresses the standard menu items (such as
    // "Edit Item"); returning false would render them as well
    return true;
}

function deleteThisSelectedListItem()
{
    if (!IsContextSet())
        return;

    var ctx = currentCtx;
    var ciid = currentItemID;

    if (confirm(ctx.RecycleBinEnabled ? L_STSRecycleConfirm_Text : L_STSDelConfirm_Text))
    {
        SubmitFormPost(ctx.HttpPath + "&Cmd=Delete&List=" + ctx.listName + "&ID=" + ciid + "&NextUsing=" + GetSource());
    }
}

Finally, make the Content Editor Web Part hidden. Voila!

Posting client side data to server side in ASP.NET AJAX

Often we need to bring some client-side data (e.g. a JavaScript variable’s value) to the server side for processing. This is usually done using hidden fields: registering a hidden field from the server, modifying its value on the client using JavaScript, and finally bringing the modified value back to the server along with a postback. I was going to do the same task within an ASP.NET AJAX application, and I found that simply changing the value from a client-side add_beginRequest handler is not sufficient to accomplish it. The reason is that AJAX fires beginRequest after preparing the request object, so changing the hidden field’s value inside this handler will not be reflected on the server side.

Here is the way how we can resolve this problem. First of all we are going to register a hidden field from the server side

void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        ScriptManager.RegisterHiddenField(UpdatePanel1, "HiddenField1", "");
    }
}

Now on the client side, we need to register a handler for the beginRequest event of the ASP.NET AJAX client-side PageRequestManager:

Sys.WebForms.PageRequestManager.getInstance().add_beginRequest(onBeginRequest);
Now, we need to modify the request object (inserting the data that we need to bring at the server end) just before it gets posted into the server. Here is how we can do it.

function onBeginRequest(sender, args)
{
    var request = args.get_request();
    var body = request.get_body();
    var token = '&HiddenField1=';

    body = body.replace(token, token + document.getElementById('someElementID').value);

    // write the modified body back, otherwise the change is lost
    request.set_body(body);
}

Here we are opening the request object and modifying the request body by inserting the value found in a text input element.

Now, at the server end, you will find this value:

void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        ScriptManager.RegisterHiddenField(UpdatePanel1, "HiddenField1", "");
    }
    else
    {
        string test = Request["HiddenField1"]; // reading the value here!
    }
}
That’s it!