It’s harder to read code than to write it

When I started writing code for commercial projects about ten years back (around 2003), I learned an important lesson. I was assigned to write a function that serializes a data structure into an XML string and sends it in a SOAP body. I wrote the module very quickly, mostly by concatenating BSTR objects into XML tags in Visual C++ 6.0. But my mentor at the time was not happy when we reviewed that function. He told me to use the existing library functions (MSXML::IXMLDOMDocument2Ptr, DOMDocument6 and so on) to do the job. I had no clue back then why he was saying so; I had never worked with MSXML before. It was easier for me to write it with BSTR than to read the MSXML APIs for hours and go through all the hassle. I was really annoyed by this.

I am still not judging whether he was right or wrong, but one thing I surely learned from him (later, when I had spent more time in the profession) is that I should have learned what MSXML is capable of and how to use it. And had I done so, I probably would have used that library instead of writing a new one.

Writing new code sounds and looks easy. But it has a cost associated with it. It has to be maintained. More people need to be aware of this code. This is especially true when it comes to rewriting something that is already released. One may get away with rewriting something that was never shipped, but one should always think twice before rewriting something that is released. It may be nasty and hard to read, but it's tested; bugs were found and fixed, and that knowledge is embedded in it. A rewrite often comes with a high chance of reintroducing a new set of bugs that will have to be found, fixed and maintained. Since that early lesson, I have been through many situations where rewriting felt like the easiest solution, the first one that comes to mind. But if the old code was released, I really push myself to reconsider whether I truly need a rewrite.

Sadly, the issue exists at a larger scale as well. When it comes to architecting a new solution, the same philosophy kicks in: it feels more comfortable to build a new solution entirely rather than assemble the existing product modules and bring them gradually onto the new platform. I think the culprit is the same in both cases. It's the unwillingness to read and understand the existing product modules that drives us to think recreating the solution is the best way to go.

Fundamentally, I feel it's an issue of reading versus writing code. When I sum up all the occasions on which I felt I should rewrite, I realize that in almost every instance I was reluctant to read the existing code, which led me toward the idea of a rewrite. This may feel like the right thing to do when I think as an individual programmer, but it's dead wrong when I evaluate the decision from the organization's perspective. It almost never gives a return on the investment.

I could never explain this better than the way Joel explained it before in his blog:


“Netscape 6.0 is finally going into its first public beta. There never was a version 5.0. The last major release, version 4.0, was released almost three years ago. Three years is an awfully long time in the Internet world. During this time, Netscape sat by, helplessly, as their market share plummeted. It’s a bit smarmy of me to criticize them for waiting so long between releases. They didn’t do it on purpose, now, did they? Well, yes. They did. They did it by making the single worst strategic mistake that any software company can make: They decided to rewrite the code from scratch.”


Very recently I faced a circumstance where I could have rewritten (and honestly, felt the urge to rewrite) the software, because it uses socket IO and manual XML-based messaging on top of it. But I restrained myself from going down that direction, tempting as it looks. As Joel wrote:


We’re programmers. Programmers are, in their hearts, architects, and the first thing they want to do when they get to a site is to bulldoze the place flat and build something grand. We’re not excited by incremental renovation: tinkering, improving, planting flower beds.


Being in the architect position, I had the privilege to set the direction. I had to convince a lot of people and stakeholders that I didn't feel we should rewrite this from day one. Instead, we should take a pragmatic approach: seal the existing code into modules, interface those with more sophisticated technologies, and gradually remove them from the stack. I am still unsure whether this direction will bring us success, but I am certain the chances are much higher than the other way around.

Quick and easy self-hosted WCF services

I realized that I have not written a blog post in a long time. I feel bad about that; this post is an attempt to shake off the laziness.

I often find myself writing console applications that have a simple WCF service and a client that invokes it to try different things. Most of the time, I want a quick service hosted using either NetTcpBinding or WsHttpBinding with very basic configuration. That triggered the urge to write a bootstrap mechanism to easily write, host and consume WCF services. I am planning to extend the implementation into a richer one gradually, but I already have something good enough for a decent kick-off. Here's how it works.

Step 1 : Creating the contract

You need to create a class library where you can keep the contract interfaces for the service you are planning to write. Something like the following:

[ServiceContract(Namespace = "")]
public interface IWcf
{
    [OperationContract]
    string Greet(string name);
}

Now you need to copy the WcfService.cs file into the same project. This file contains one big class named WcfService, which has the public methods to host services and also to create client proxies to invoke them. The class can be downloaded from this Git repository. Once you have it added to your project, go to step 2.

Step 2 : Creating Console Server project.

Create a console application that will host the service. Add a reference to the project created in step 1. Define your service implementation class as follows:

// Sample service
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class MyService : IWcf
{
    public string Greet(string name)
    {
        return DateTime.Now.ToString() + name;
    }
}

Finally, modify Program.cs to have something like the following:

class Program
{
    static void Main(string[] args)
    {
        try
        {
            // use WcfService.Tcp for NetTcp binding or WcfService.Http for WSHttpBinding
            var hosts = WcfService.DefaultFactory.CreateServers(
                new List<Type> { typeof(MyService) },
                (t) => { return t.Name; },          // service name resolver
                (t) => { return typeof(IWcf); },    // contract type resolver
                (sender, exception) => { Trace.Write(exception); },
                (msg) => { Trace.Write(msg); },
                (msg) => { Trace.Write(msg); },
                (msg) => { Trace.Write(msg); });

            Console.WriteLine("Server started....");
            Console.ReadLine();
        }
        catch (Exception ex)
        {
            Trace.Write(ex);
        }
    }
}

At this point you should be able to hit F5 and run the server console program.

Step 3 : Creating Console client project

Create another console application and modify Program.cs to something like the following:

class Program
{
    static void Main(string[] args)
    {
        try
        {
            // use WcfService.Tcp for NetTcp binding or WcfService.Http for WSHttpBinding
            using (var wcf = WcfService.DefaultFactory.CreateChannel(
                Environment.MachineName, 8789, (t) => { return "MyService"; }, "WcfServices"))
            {
                var result = wcf.Client.Greet("Moim");
                Console.WriteLine(result);
            }
        }
        catch (Exception ex)
        {
            Trace.Write(ex);
        }
    }
}

You are good to go! Hope this helps somebody (at least myself).

Prompt for Save changes in MMC 3.0 Application (C#)

Microsoft Management Console 3.0 is a managed platform for writing and hosting applications inside the standard Windows configuration console. It provides a simple, consistent and integrated management user interface and administrative console. One of the products I am currently working on uses this MMC SDK; we used it to develop the configuration panel for administrative purposes.

That's the quick background for this post. However, this post is not meant for someone who has never worked with the MMC SDK; I am assuming you already know the basics.

Our application has quite a number of ScopeNodes, and each of them has an associated details page (in MMC terminology a View, in our case mostly a FormView). All of them render application data in a WinForms UserControl, and we allow the user to modify that configuration data in place.
The problem begins when the user moves to a different scope node and then closes the MMC console window: the changes the user made earlier are not saved. As you already know, the MMC console is an MDI application and the scope nodes are completely isolated from each other. Therefore, you can't prompt the user to save or discard the pending changes.

I googled a lot for a solution, but ended up with a myriad of frustrations. Many people faced the same problem and eventually implemented a pop-up dialog, so that they could control the full lifecycle of the save functionality. But that means a lot of development work when you have many scope nodes: you need to define two forms for each of them, one for the read-only display and another pop-up to edit the data. For my product that was simply not viable. In fact, it even looks nasty to get a pop-up for every node's configuration.
Anyway, I finally solved the problem myself. It's not a very elegant way to settle the issue, but it works superbly for my purpose. Here is what I did: I have a class that intercepts the Windows messages and takes action when the user tries to close the main console.

/// Subclassing the main window handle's WndProc method to intercept
/// the close event
internal class SubClassHWND : NativeWindow
{
    private const int WM_CLOSE = 0x10;
    private MySnapIn _snapIn; // MySnapIn class is derived from SnapIn (MMC SDK)
    private List<SubClassHWND> _childNativeWindows;

    // Constructs a new instance of this class
    internal SubClassHWND(MySnapIn snapIn)
    {
        this._snapIn = snapIn;
        this._childNativeWindows = new List<SubClassHWND>();
    }

    // Starts the hook process
    internal void StartHook()
    {
        // get the handle of the main console window
        var handle = Process.GetCurrentProcess().MainWindowHandle;

        if (handle != IntPtr.Zero)
        {
            // assign it now
            this.AssignHandle(handle);

            // hook the children as well
            foreach (var childHandle in GetChildWindows(handle))
            {
                var childSubClass = new SubClassHWND(this._snapIn);

                // assign this child handle
                childSubClass.AssignHandle(childHandle);

                // keep the instance alive
                this._childNativeWindows.Add(childSubClass);
            }
        }
    }

    // The overridden window procedure
    protected override void WndProc(ref Message m)
    {
        if (_snapIn != null && m.Msg == WM_CLOSE)
        {   // we have a valid snap-in instance
            if (!_snapIn.CanCloseSnapIn(this.Handle))
            {   // we can't close yet
                return; // don't close this window then
            }
        }
        // delegate the message to the chain
        base.WndProc(ref m);
    }

    // Requests the handle to close the window
    internal static void RequestClose(IntPtr hwnd)
    {
        SendMessage(hwnd.ToInt32(), WM_CLOSE, 0, 0);
    }

    // Sends a Windows message
    [DllImport("user32.dll")]
    public static extern int SendMessage(int hWnd,
        int Msg,
        int wParam,
        int lParam);

    [DllImport("user32.dll")]
    [return: MarshalAs(UnmanagedType.Bool)]
    public static extern bool EnumChildWindows(IntPtr window, EnumWindowProc callback, IntPtr i);

    public static List<IntPtr> GetChildWindows(IntPtr parent)
    {
        List<IntPtr> result = new List<IntPtr>();
        GCHandle listHandle = GCHandle.Alloc(result);
        try
        {
            EnumWindowProc childProc = new EnumWindowProc(EnumWindow);
            EnumChildWindows(parent, childProc, GCHandle.ToIntPtr(listHandle));
        }
        finally
        {
            if (listHandle.IsAllocated)
                listHandle.Free();
        }
        return result;
    }

    private static bool EnumWindow(IntPtr handle, IntPtr pointer)
    {
        GCHandle gch = GCHandle.FromIntPtr(pointer);
        List<IntPtr> list = gch.Target as List<IntPtr>;
        if (list == null)
            throw new InvalidCastException("GCHandle Target could not be cast as List<IntPtr>");
        list.Add(handle);
        // You can modify this to return false if you want to cancel the enumeration
        return true;
    }

    public delegate bool EnumWindowProc(IntPtr hWnd, IntPtr parameter);
}

This class has the WndProc method, the window message pump (or dispatcher method), which receives all the messages sent to the MMC console main window. The class also sets a message hook on all the child windows hosted inside the MMC MDI window.

Now we only need to invoke the StartHook method from the SnapIn to activate this interception hook.

// The subclassed SnapIn for my application
public class MySnapIn : SnapIn
{
    internal SubClassHWND SubClassHWND
    {
        get;
        private set;
    }

    protected MySnapIn()
    {
        // create the subclassing support
        SubClassHWND = new SubClassHWND(this);
    }

    protected override void OnInitialize()
    {
        // Start the hook now
        SubClassHWND.StartHook();
    }
}

Now we have the option to do something before closing the application, like prompting with a Yes/No/Cancel dialog the way Notepad does for a dirty file.

// Determines whether the snap-in can be closed now or not
internal bool CanCloseSnapIn(IntPtr requestWindow)
{
    if (IsDirty)
    {   // found a dirty node; ask the user whether we can
        // close or not (the prompt must run on the GUI thread)
        this.BeginInvoke(new System.Action(() =>
        {
            using (var dlg = new SnapInCloseWarningDialog())
            {
                var dlgRes = Console.ShowDialog(dlg);

                switch (dlgRes)
                {
                    case DialogResult.Yes:
                        SaveDirtyData(); // save the data here
                        IsDirty = false; // set to false, so next time the
                                         // method will not prevent closing
                        break;
                    case DialogResult.No:
                        IsDirty = false; // discard the changes
                        break;
                    case DialogResult.Cancel:
                        break; // do nothing
                }
            }
        }));
        return false; // discard this WM_CLOSE for now
    }
    return true;
}

One small problem remains, though. The dispatcher method receives WM_CLOSE on a thread that can't display a window, because the current thread is not really a GUI thread. So we have to use a tricky solution: display the prompt through a delegate (using BeginInvoke) and discard the WM_CLOSE message we already intercepted.
Later, when a choice has been made: if the user selected 'Yes', we have to save the data and then close the application. If 'No' was selected, we have to close the snap-in as well. Only for 'Cancel' do we do nothing. So the critical part is how we can close the window again. Here is how we can do that:
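The snippet showing that re-close step did not survive in this post; as a sketch, the idea is that the dialog callback, after handling a Yes or No choice, re-sends the close request through the static RequestClose helper with the window handle that was saved from the original WM_CLOSE:

```csharp
// Inside the dialog callback, after the user chose Yes (save) or No (discard):
IsDirty = false;                          // nothing left to prompt about
SubClassHWND.RequestClose(requestWindow); // re-send WM_CLOSE to the same window
```

Since IsDirty is now false, the re-sent WM_CLOSE passes straight through CanCloseSnapIn and the window closes normally.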

Notice that the SnapIn's CanCloseSnapIn method has a parameter which is the handle (an IntPtr in this case) of the window the user tried to close. This was done on purpose: it offers the possibility to send a WM_CLOSE again to that same window. So even if the user closes an MDI child, it will close only that child window after saving, which is just perfect!

Hope this will help somebody struggling with the same gotcha.

Custom SPGridView

Recently I had to create a custom grid control that allows the user to group, sort, page and filter the data displayed in it. I spent a few days figuring out whether any third-party controls (like Xceed, Infragistics etc.) could meet my requirements. Sadly, I found that they couldn't.

Well, my requirements were as follows

1. The grid should display a *really large* number of rows, so fetching every row from the database would simply kill the application.
2. The grid should allow sorting at the database level, so that the database indexes can be used for efficiency.
3. The filtering should also be performed by the database engine.
4. It should allow the user to group the data in it. As you probably understand, the grouping should also be done by the database engine.
5. It should offer a look-and-feel similar to native SharePoint 2007 lists.

Now, when I tried to find a control (commercial or free) offering these features, I had no option but to be disappointed.

Almost all of the controls I looked at offer these features, but they ask for the entire DataTable in order to bind it, which is simply not possible for my purpose. Even if I had used the ObjectDataProvider, I still couldn't do the grouping at the database end.

Interestingly, the SPGridView control (shipped by Microsoft) also doesn't allow multiple groupings, and in grouped scenarios the other features, like filtering, paging and sorting, don't work properly.

Therefore, I created a custom grid control that makes me happy. It wasn't that difficult. I did a complete UI rendition inside the grid control and provided an interface that the client application implements to supply the data.
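The interface itself hasn't been published yet; as a rough sketch with entirely hypothetical names, a data-provider contract of the shape described, pushing paging, sorting, grouping and filtering down to the database, could look like this:

```csharp
// Hypothetical contract a client application could implement so the grid
// never needs the full DataTable: the grid asks only for the page it renders.
public interface IGridDataProvider
{
    // Total row count for the current filter, so the grid can size its pager.
    int GetRowCount(string filterExpression);

    // One page of rows, already sorted/grouped/filtered by the database engine.
    DataTable GetPage(int pageIndex,
                      int pageSize,
                      string sortExpression,
                      IList<string> groupByColumns,
                      string filterExpression);
}
```

The key design point is that the grid never pulls all rows; it delegates every expensive operation to whatever the provider implementation sits on top of.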

It's working for me now. I'm still doing some QC to make sure it's decent enough; after that I'm going to publish the source of the control. For now, have a look at the video where the grid control can be seen in action!

Stay tuned!

jQuery Autocomplete inside partially rendered user control?

Yesterday a very small issue put me through hell for almost an hour. Finally I was able to figure out a workaround.

I am not claiming this is the best approach to solve the problem, but it definitely worked nicely for me, and I haven't found any downside yet.

Well, let's get into the story. I was using the awesome jQuery Autocomplete plug-in inside my MVC application. Initially I put the autocomplete textbox into the Site.Master page and it worked superbly! I used the HtmlHelper extension model to accomplish the task; John Zablocki has written a wonderful post about this approach, and I basically used it with subtle modifications. For those who have no idea what the jQuery Autocomplete plugin is: it's a regular HTML text input that is registered with the jQuery autocomplete plugin library; when the user types something into it, the plugin makes an Ajax request to the server to get the possible values for the characters the user has typed. Again, please read John's blog for a good overview.

My problem began when I moved this control from the master page into a view user control in my MVC application, one that renders partially. To explain more clearly, my user control has the following code.
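The markup itself did not survive in this post; in essence it was a text input plus the script block that registers it with the plugin. A minimal sketch (the element ID and the action URL are illustrative):

```javascript
// Inside the .ascx: <input type="text" id="searchBox" /> followed by
// this script block, which registers the input with the jQuery
// Autocomplete plugin and points it at the server-side suggestion action.
$("#searchBox").autocomplete("/Search/Suggest"); // URL is illustrative
```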

Now, from Index.aspx (I am talking about Home\Index.aspx, the one generated for a hello-world MVC application), I am rendering this user control using Ajax. And I noticed that the jQuery wiring is not working. Why??

Well, after spending a few minutes I understood that the script used to register this control with jQuery is not executed when the DOM is modified using Ajax. This means that if you write a simple .ascx control containing only a script block that, say, shows an alert saying "Hi", and render that .ascx using Ajax, you will not see anything, even though I was expecting the message box to greet me. When you modify the DOM dynamically, any scripts in the injected markup are not executed automatically unless you do something to force it. This is the reason my jQuery plugin was not working. So what I did is force the browser to load all the scripts returned by the server during the Ajax call. I loaded them at the window level; it really depends on where it suits you best.

So I was able to make this work with an OnSuccess event handler for my Ajax call: once the Ajax invocation finished, I force-loaded all the script blocks residing in the .ascx. Voila!
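A sketch of that idea (the callback name and container ID are illustrative): the handler pulls the script blocks out of the freshly injected markup and evaluates them at window level, since the browser skipped them during the DOM update.

```javascript
// Runs after the Ajax call has injected the partial view into the DOM.
function onSuccess() {
    var container = document.getElementById("resultPanel"); // illustrative ID
    var scripts = container.getElementsByTagName("script");
    for (var i = 0; i < scripts.length; i++) {
        window.eval(scripts[i].text); // force-execute the skipped script
    }
}
```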

Parallel Extensions of .NET 4.0

Last night I was playing around with some cool new features of .NET Framework 4.0, of which a CTP has been released as a VPC image that can be downloaded from here.

There is a lot of new stuff Microsoft plans to release with Visual Studio 10, and the Parallel Extensions are one of them. The Parallel Extensions are a set of APIs that live under the System namespace inside the mscorlib assembly. Therefore, programmers do not need to reference any other assembly to benefit from this cool feature; they get it out of the box.

Well, what are the Parallel Extensions all about?

Microsoft's view is that in a multithreaded application, developers have to focus on many issues around managing threads, scalability and so on. The Parallel Extensions are an attempt to let developers focus on the core business functionality rather than the thread-management plumbing, and they provide some cool ways to manage concurrent applications. The Parallel Extensions comprise three major areas. The first is the Task Parallel Library: there is a Task class, which is all the developer should worry about; instead of dealing with threads, they consider themselves to be writing tasks, and the framework executes those tasks in parallel. The second major area is called PLINQ, which is basically LINQ to Objects operating in parallel mode. And the third is the coordination data structures.

Let's look at some code snippets to get a brief idea.

We will use a simple console application; note in Solution Explorer that we are not referencing any assemblies beyond the defaults.

So parallel extensions do not require any special libraries!

The code takes an input integer, doubles it, and finds the prime numbers from zero up to that bound. This is nothing particularly useful, but it's enough to demonstrate the idea. The method also writes the executing thread's ID to the console window.
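The snippet itself did not survive in this post; a minimal reconstruction of the method described above (the name FindPrimes is my own) might be:

```csharp
// Doubles the input and counts the primes from 0 up to that bound,
// printing the ID of the thread that ran the work.
static void FindPrimes(int input)
{
    int limit = input * 2;
    int count = 0;
    for (int n = 2; n <= limit; n++)
    {
        bool isPrime = true;
        for (int d = 2; d * d <= n; d++)
        {
            if (n % d == 0) { isPrime = false; break; }
        }
        if (isPrime) count++;
    }
    Console.WriteLine("Thread {0} found {1} primes up to {2}",
        Thread.CurrentThread.ManagedThreadId, count, limit);
}
```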

Now let's first create a few threads to execute the method written above. We will create 10 threads executing it simultaneously.
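The original snippet is also missing; a sketch of that manual-thread version, calling the prime-finding method described in the text (here called FindPrimes), could be:

```csharp
// Spawn 10 threads by hand; we own the Thread objects,
// so Join(), Abort() etc. are available to us.
var threads = new List<Thread>();
for (int i = 0; i < 10; i++)
{
    var t = new Thread(() => FindPrimes(1000));
    threads.Add(t);
    t.Start();
}
threads.ForEach(t => t.Join()); // wait for all of them
```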

Things to notice here: in this version we have the thread instances under our control, so we can invoke methods like Join(), Abort() etc., but the developer is responsible for managing the threads on their own. The code produces the following output.

See, ten different threads were actually created to execute this. Now let's use thread pool threads for the same job.
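Again a sketch, since the snippet is missing (FindPrimes being the prime-finding method described in the text):

```csharp
// Queue the same work to the CLR thread pool instead; the pool
// decides which (and how many) threads actually run the work items.
for (int i = 0; i < 10; i++)
{
    ThreadPool.QueueUserWorkItem(_ => FindPrimes(1000));
}
Console.ReadLine(); // keep the process alive while pool threads run
```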

This generates output like the following.

Look: it uses the same thread (6) for all the work items; the .NET thread pool reuses thread objects effectively. But this way we lose the control we had in the previous snippet. For example, we can't cancel a certain thread directly, because we don't actually know which thread is going to execute which work item.

Now let's look at the cool Parallel Extensions Task class. It's pretty much like the Thread version and offers methods such as Wait(), CancelAndWait() etc. to interact with the task. In addition, it takes advantage of the execution environment: if you run the application on a multi-core processor, it spawns more threads to carry out the tasks. Since I am running this in a VPC with a single-core CPU, it uses one thread to carry out the tasks. But that is no longer my headache; managing the threads, or even thinking about them, is taken care of by the parallel framework. Cool!
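The snippet is missing from this post as well; here is a sketch written against the Task API as it finally shipped in .NET 4.0 (the CTP discussed here exposed a slightly different factory method), with FindPrimes being the prime-finding method described earlier in the text:

```csharp
// Create 10 tasks; the scheduler decides how many threads to use,
// based on the number of cores available on the machine.
var tasks = new List<Task>();
for (int i = 0; i < 10; i++)
{
    tasks.Add(Task.Factory.StartNew(() => FindPrimes(1000)));
}
Task.WaitAll(tasks.ToArray()); // wait for every task to finish
```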

This generates the same output as the thread-pool snippet, but only because I ran it in a VPC. On a multi-core machine it will use more threads to get optimal performance.

Parallel Static class

Well, this is even more interesting. It offers a few iterative methods that automatically execute each iteration as a task, in parallel of course. Isn't that cool?
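A sketch of the kind of loop being described, written against the Parallel class as it shipped in .NET 4.0 (FindPrimes being the prime-finding method described earlier in the text):

```csharp
// Each iteration of the loop body is scheduled as a task and may run
// in parallel with the others, so the body must be safe to run concurrently.
Parallel.For(0, 10, i =>
{
    FindPrimes(1000);
});
```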

I hope to explain PLINQ in my next post. Happy programming!

Extension Methods

.NET 3.5 (C# 3.0) provides a feature named "extension methods", which is used heavily by the LINQ library. For example, the Enumerable class in the System.Linq namespace declares a whole bunch of static extension methods that allow users to write LINQ-enabled, smart-looking methods on any IEnumerable instance.
For instance
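The snippet itself is missing from this post; a small example of the kind of filter code being discussed (the list contents are my own):

```csharp
var numbers = new List<int> { 1, 5, 10, 15, 20 };

// Where is an extension method on IEnumerable<T>; the lambda is the filter.
var big = numbers.Where(n => n > 8);

foreach (var n in big)
    Console.WriteLine(n); // prints 10, 15 and 20
```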

This generates output like the following.

Here we are using the Where method, which is basically an extension method for any IEnumerable<T> instance. Extension methods, along with lambda expressions (another new feature of C# 3.0), allow us to write very expressive filter code like the snippet shown above.
So, what is an extension method?
According to MSDN:
Extension methods enable you to “add” methods to existing types without creating a new derived type, recompiling, or otherwise modifying the original type. Extension methods are a special kind of static method, but they are called as if they were instance methods on the extended type. For client code written in C# and Visual Basic, there is no apparent difference between calling an extension method and the methods that are actually defined in a type.
I personally like this feature very much. Beyond its LINQ-related usage, extension methods can be very handy in some other cases.
Consider a scenario where I have an interface that has a method with three arguments.
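The snippet is missing here; as a sketch with hypothetical names (the text only tells us the last argument is called indent):

```csharp
// A hypothetical interface of the shape described: one method,
// three arguments, the last of which callers almost always pass as true.
public interface IXmlWriter
{
    void Write(string elementName, string value, bool indent);
}
```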

Now at some point I found it would be better to provide an overload of this method where the last argument is absent, and the implementation of the interface passes true as the default value of indent.

If I do so, each implementer of this interface needs to implement the handy overloaded version and provide the default value true. But that seems a burden we could take away from the implementers. Also, there is a chance somebody will implement the second method and mistakenly pass false as the default.
We can resolve this issue very neatly using an extension method. Consider the following snippet.
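The snippet is missing; a sketch using hypothetical names (IXmlWriter standing in for the three-argument interface in this scenario):

```csharp
// The interface keeps a single method; the convenient overload lives
// in a static class as an extension method, so the default value true
// is written exactly once, outside every implementer.
public static class XmlWriterExtensions
{
    public static void Write(this IXmlWriter writer,
                             string elementName,
                             string value)
    {
        writer.Write(elementName, value, true); // indent defaults to true
    }
}
```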

See: the interface contains only one version of the method, and implementers are not bothered at all with the overload and default-value jargon. But consumers of the interface can still call it as if it were part of the interface; they only need to import the namespace where the extension method is declared. Visual Studio even provides IntelliSense support just like a regular overload scenario. Isn't it nice?
Internally, what is happening? Well, this is basically syntactic sugar, nothing more. The compiler actually generates regular static method calls for extension methods; it interprets the syntax as follows.
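The translated call is missing from the post; with hypothetical names, the compiler turns an instance-style extension call into a plain static call:

```csharp
// What you write (Write being an extension method on IXmlWriter,
// declared in a static class XmlWriterExtensions):
writer.Write("name", "value");

// What the compiler actually emits:
XmlWriterExtensions.Write(writer, "name", "value");
```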
So this is compile-time stuff; at runtime it's nothing different from a regular static method invocation.
Like C#, Visual Basic also supports extension methods. But there is one exception: don't use the extension-method invocation syntax for any extension method written for the System.Object class. VB treats System.Object differently, and it will not generate the actual static method invocation at compile time; instead, what actually happens is that an exception is raised at runtime. So be aware of it.
This is really a great feature among the other .NET 3.5 additions; we can now write common boilerplate code as extension methods in enterprise solutions. For instance, helpers like ArgumentHelper.ThrowExceptionIfNull() or a String.IsNullOrEmpty()-style check can be written as extension methods and used in a very handy way.

With great power comes great responsibility.

As this offers you a lot of power to write methods for any type, you need to make sure you are not writing unnecessary extension methods that confuse others. For example, writing a lot of extension methods for System.Object is definitely not a good idea.
I'm hoping for something called "extension properties", which could be another good addition. I don't think it should be difficult, since internally .NET properties are basically nothing but methods. I hope Microsoft ships extension properties in a future version of the .NET Framework.
Happy programming!

How to remove SharePoint context menus selectively

I needed to figure out how I could selectively remove some of the standard SharePoint list context menu items. For example, most list context menus contain Edit Item, Delete Item and so on; assume I have to keep the Delete menu but need to strike out "Edit Item". How can we do that?


Go to the page settings. Add a new Content Editor Web Part into the page and go to the settings of that web part. Open the source editor and put the following script in it.

function Custom_AddListMenuItems(m, ctx)
{
    var strDelete = "Delete this Item";
    var imgDelete = ""; // optionally, a URL to a menu icon
    var strDeleteAction = "deleteThisSelectedListItem();";

    CAMOpt(m, strDelete, strDeleteAction, imgDelete);

    // add a separator to the menu
    CAMSep(m);

    // returning false would render the standard menu items as well;
    // true suppresses them
    return true;
}


function deleteThisSelectedListItem()
{
    if (!IsContextSet())
        return;

    var ctx = currentCtx;
    var ciid = currentItemID;

    if (confirm(ctx.RecycleBinEnabled ? L_STSRecycleConfirm_Text : L_STSDelConfirm_Text))
    {
        SubmitFormPost(ctx.HttpPath + "&Cmd=Delete&List=" + ctx.listName +
            "&ID=" + ciid + "&NextUsing=" + GetSource());
    }
}
Finally make the content editor web part invisible. Voila!

Posting client side data to server side in ASP.NET AJAX

Often we need to bring some client-side data (e.g. a JavaScript variable's value) to the server side for processing. This is usually done using hidden fields: registering a hidden field from the server, modifying its value on the client side using JavaScript, and finally bringing the modified value back to the server along with a postback. I was about to do this very task within an ASP.NET AJAX application, and I found that a handler for the client-side add_beginRequest that simply updates the hidden field is not sufficient. The reason is that the beginRequest event is fired by ASP.NET AJAX after the request object has been prepared, so changing the field's value inside that handler will not be reflected on the server side.

Here is how we can solve this. First of all, we register a hidden field from the server side.

void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        ScriptManager.RegisterHiddenField(UpdatePanel1, "HiddenField1", "");
    }
}

Now, on the client side, we need to register a handler for the beginRequest event of the ASP.NET AJAX client-side PageRequestManager.
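The registration script did not survive in this post; the standard way to wire it up is:

```javascript
// Register onBeginRequest with the client-side PageRequestManager so it
// runs just before every ASP.NET AJAX partial postback is sent.
Sys.WebForms.PageRequestManager.getInstance().add_beginRequest(onBeginRequest);
```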




Now we need to modify the request object (inserting the data that we need to bring to the server end) just before it gets posted to the server. Here is how we can do it.

function onBeginRequest(sender, args)
{
    var request = args.get_request();
    var body = request.get_body();
    var token = '&HiddenField1=';

    body = body.replace(token, token + document.getElementById('someElementID').value);
    request.set_body(body); // write the modified body back to the request
}



Here we open the request object and modify its body by inserting the value found in a text input element.

Now at the server end you will find this value.

void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        ScriptManager.RegisterHiddenField(UpdatePanel1, "HiddenField1", "");
    }
    else
    {
        string etst = Request["HiddenField1"]; // reading the value here!
    }
}


That’s it!