RabbitMQ High-availability clusters on Azure VM

Background

Recently I had to look into a reliable AMQP solution (publish-subscribe queue model) in order to build a message broker for a large application. I started by comparing Azure Service Bus and RabbitMQ. It didn't take long to see that RabbitMQ is much more attractive than Service Bus in terms of efficiency and cost when there is a large number of messages. See the image taken from Mariusz Wojcik's blog.

Setting up RabbitMQ on a Windows machine is relatively easy; the RabbitMQ web site documents it nicely. However, when it came to installing a RabbitMQ cluster on cloud VMs, I found Linux (Ubuntu) VMs handier because they boot faster. I hadn't used a *nix OS for quite a long time, so I found the journey interesting enough to write a post about it.

Spin up VMs on Azure

We need two Linux VMs, both with RabbitMQ installed as a server, and they will be clustered. The high-level picture of the design looks like the following:

Log in to the Azure portal and create two VM instances based on the Ubuntu Server 14.04 LTS image from the Azure VM depot.

I have named them MUbuntu1 and MUbuntu2. The VMs need to be in the same cloud service and the same availability set to achieve redundancy and high availability. The availability set ensures that the Azure Fabric Controller recognizes this scenario and does not take all the VMs down together when it performs maintenance tasks, e.g. OS patches/updates.

Once the VM instances are up and running, we need to define some endpoints for RabbitMQ, and they need to be load balanced. We go to the MUbuntu1 details in the management portal and add two endpoints: port 5672 for RabbitMQ connections from client applications and port 15672 for the RabbitMQ management portal application. Scott Hanselman has described in detail how to create load-balanced VMs. Once we create them it will look like the following:

Now we can SSH into both of these machines (Azure has already mapped the SSH port 22 to a public port, which can be found on the right side of the dashboard page for the VM).

Install RabbitMQ

Once we SSH into both machines, we can install RabbitMQ by executing the following commands:



sudo add-apt-repository 'deb http://www.rabbitmq.com/debian/ testing main'
sudo apt-get update
sudo apt-get -q -y --force-yes install rabbitmq-server

The above apt-get commands install Erlang and the RabbitMQ server on both machines. Erlang nodes use a cookie to determine whether they are allowed to communicate with each other; for two nodes to be able to communicate they must have the same cookie. Erlang automatically creates a random cookie file when the RabbitMQ server starts up. The easiest way to proceed is to allow one node to create the file and then copy it to all the other nodes in the cluster. On our VMs the cookie will typically be located in /var/lib/rabbitmq/.erlang.cookie

We are going to set the same cookie on both machines by executing the following commands:



echo 'ERLANGCOOKIEVALUE' | sudo tee /var/lib/rabbitmq/.erlang.cookie
sudo chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
sudo chmod 400 /var/lib/rabbitmq/.erlang.cookie
sudo invoke-rc.d rabbitmq-server start

Install the management portal for RabbitMQ

Now we can also install the RabbitMQ management portal so we can monitor the queue from a browser. The following commands install the management plugin:



sudo rabbitmq-plugins enable rabbitmq_management
sudo invoke-rc.d rabbitmq-server stop
sudo invoke-rc.d rabbitmq-server start

So far so good. Now we create a user that we will use to connect to the queue from client applications and for monitoring. You can manage users anytime later too.



sudo rabbitmqctl add_user <username> <password>
sudo rabbitmqctl set_user_tags <username> administrator
sudo rabbitmqctl set_permissions -p / <username> '.*' '.*' '.*'

Configuring the cluster

So far we have two RabbitMQ servers up and running; it's time to connect them as a cluster. To do so, we go to the second machine (MUbuntu2) and join it to the first node. The following commands do that:


sudo rabbitmqctl stop_app
sudo rabbitmqctl join_cluster rabbit@MUbuntu1
sudo rabbitmqctl start_app
sudo rabbitmqctl set_cluster_name RabbitCluster

We can verify that the cluster is configured properly via the RabbitMQ management portal:

Or from the SSH terminal, for example by running sudo rabbitmqctl cluster_status:

Queues within a RabbitMQ cluster are located on a single node by default; they need to be mirrored across multiple nodes. Each mirrored queue consists of one master and one or more slaves, with the oldest slave being promoted to the new master if the old master disappears for any reason. Messages published to the queue are replicated to all slaves. Consumers are connected to the master regardless of which node they connect to, with slaves dropping messages that have been acknowledged at the master.

Queue mirroring therefore enhances availability, but does not distribute load across nodes (all participating nodes each do all the work). This solution requires a RabbitMQ cluster, which means that it will not cope seamlessly with network partitions within the cluster and, for that reason, is not recommended for use across a WAN (though of course, clients can still connect from as near and as far as needed).

Queues have mirroring enabled via policy. Policies can change at any time; it is valid to create a non-mirrored queue and then make it mirrored at some later point (and vice versa). More on this is documented on the RabbitMQ site. For this example, we will mirror all queues by executing this over SSH:


sudo rabbitmqctl set_policy ha-all "" '{"ha-mode":"all","ha-sync-mode":"automatic"}'

That should be it. The cluster is now up and running. We can create a quick .NET console application to test it. I have created two console applications and a class library that has one class as the message contract. The VS solution looks like this:

We will use EasyNetQ to connect to RabbitMQ; we can add it via NuGet to the publisher and subscriber projects.

In the contract project (class library), we have the following classes in a single code file:


namespace Contracts
{
    public class RabbitClusterAzure
    {
        public const string ConnectionString =
            @"host=;username=;password=";
    }

    public class Message
    {
        public string Body { get; set; }
    }
}
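
For illustration, a filled-in connection string could look like the sketch below; the host name and credentials are placeholders (standing in for the cloud service's load-balanced endpoint and the RabbitMQ user we created earlier), not real values.

// Hypothetical values only -- substitute your cloud service's
// load-balanced endpoint and the RabbitMQ user created earlier.
public const string ExampleConnectionString =
    @"host=mycloudservice.cloudapp.net;username=admin;password=secret";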

The publisher project has the following code in Program.cs:


using System;
using Contracts;
using EasyNetQ;

namespace Publisher
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var bus = RabbitHutch.CreateBus(RabbitClusterAzure.ConnectionString))
            {
                var input = "";
                Console.WriteLine("Enter a message. 'Quit' to quit.");
                while ((input = Console.ReadLine()) != "Quit")
                {
                    Publish(bus, input);
                }
            }
        }

        private static void Publish(IBus bus, string input)
        {
            bus.Publish(new Contracts.Message
            {
                Body = input
            });
        }
    }
}

Finally, the subscriber project has the following code in Program.cs:


using System;
using Contracts;
using EasyNetQ;

namespace Subscriber
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var bus = RabbitHutch.CreateBus(RabbitClusterAzure.ConnectionString))
            {
                bus.Subscribe<Contracts.Message>("Sample_Topic", HandleTextMessage);

                Console.WriteLine("Listening for messages. Hit <Enter> to quit.");
                Console.ReadLine();
            }
        }

        static void HandleTextMessage(Contracts.Message textMessage)
        {
            Console.ForegroundColor = ConsoleColor.Red;
            Console.WriteLine("Got message: {0}", textMessage.Body);
            Console.ResetColor();
        }
    }
}

Now we can run the publisher and multiple instances of the subscriber, and messages will be dispatched round-robin (direct exchange). We can also take one of the VMs down and it will not lose any messages.

We can also see the traffic to the VMs (and to the cluster instance) directly from the Azure portal.

Conclusion

I have to admit, I found it extremely easy and convenient to configure and run RabbitMQ clusters. The steps are simple, and the setup just works.

It’s harder to read code than to write it

When I started writing code for commercial projects ten years back (around 2003), I learned an important lesson. I was assigned to write a function that serialized a data structure into an XML string and sent it in a SOAP body. I was very fast writing the module, which mostly concatenated BSTR objects in Visual C++ 6.0 into XML tags. But my mentor at the time was not happy when we reviewed that function. He told me to use the existing library (MSXML::IXMLDOMDocument2Ptr, DOMDocument6 etc.) to do the job. I had no clue at the time why he was saying so; I had never worked with MSXML before. It was easier for me to write it with BSTRs than to read the MSXML APIs for hours and go through all the hassles of it. I was really annoyed by this.

I am still not judging whether he was right or wrong, but one thing I certainly learned from him (later, when I had spent more time in my profession) is that I should have learned what MSXML was capable of and how to use it. And probably, had I done so, I might actually have used that library instead of writing a new one.

Writing new code sounds and looks easy. But it has a cost associated with it: it has to be maintained, and more people need to be aware of it. This is especially true when it comes to rewriting something that is already released. One may get away with rewriting something that was never shipped, but one should always think twice before rewriting something that is released. It may be nasty and hard to read, but it's tested; bugs were found and fixed, and it has that knowledge embedded into it. A rewrite often comes with a high chance of reintroducing a new set of bugs that will have to be found, fixed and maintained. Since that early lesson, I have been through many situations where rewriting felt like the easiest solution, the one that comes first to mind. But if the old code was released, I really push myself to reconsider whether I truly need a rewrite.

The issue sadly exists at a larger scale as well. When it comes to architecting a new solution, the same philosophy kicks in: it feels more comfortable to write a new solution entirely rather than assemble the existing product modules and bring them gradually onto the new platform. I think the culprit is the same in both cases. It's the unwillingness to read and understand the existing product modules that drives us to think recreating the solution is the best way to go.

Fundamentally, I feel it's an issue of reading versus writing code. Summing up all the occasions on which I felt I should rewrite, I have realized that in almost every instance I was reluctant to read the existing code, which led me toward a rewrite. This may feel like the right thing to do when I think as an individual, as a programmer. But it's plainly wrong when I evaluate the decision from the organization's perspective. It almost never gives a ROI.

I could never explain this better than the way Joel explained it before in his blog:


“Netscape 6.0 is finally going into its first public beta. There never was a version 5.0. The last major release, version 4.0, was released almost three years ago. Three years is an awfully long time in the Internet world. During this time, Netscape sat by, helplessly, as their market share plummeted. It’s a bit smarmy of me to criticize them for waiting so long between releases. They didn’t do it on purpose, now, did they? Well, yes. They did. They did it by making the single worst strategic mistake that any software company can make: They decided to rewrite the code from scratch.”


Very recently I was in a situation where I could have rewritten (and honestly felt the urge to rewrite) a piece of software because it uses socket IO with manual XML-based messaging on top of it. But I refrained from thinking in that direction, however catchy it looks. Like Joel wrote:


We’re programmers. Programmers are, in their hearts, architects, and the first thing they want to do when they get to a site is to bulldoze the place flat and build something grand. We’re not excited by incremental renovation: tinkering, improving, planting flower beds.


Being in the architect position, I had the privilege to set the direction. I had to convince a lot of people/stakeholders that we should not rewrite this from day one. Instead, we should take a pragmatic approach: seal the existing code into modules, interface it with more sophisticated technologies, and gradually remove the old parts from the stack. I am still unsure whether this direction will bring us success, but I am certain that the chances are much higher than the other way around.

Quick and easy self-hosted WCF services

I realized that I haven't written a blog post for a long time. I feel bad about that; this post is an attempt to get out of the laziness.

I often find myself writing console applications that have a simple WCF service and a client that invokes it to check different things. Most of the time I want a quick service hosted with either NetTcpBinding or WsHttpBinding and very basic configuration. That triggered the urge to write a bootstrap mechanism to easily write, host and consume WCF services. I am planning to extend the implementation into a richer one gradually, but I already have something decent to kick off with. Here's how it works.

Step 1 : Creating the contract

You need to create a class library where you can put the contract interfaces for the service you are planning to write. Something like the following:



[ServiceContract(Namespace = "http://abc.com/enterpriseservices")]
public interface IWcf
{
    [OperationContract]
    string Greet(string name);
}

Now you need to copy the WcfService.cs file into the same project. This file contains one big class named WcfService, which has public methods for hosting services and for creating client proxies to invoke them. The class can be downloaded from this Git repository: https://github.com/MoimHossain/WcfServer. Once you have added it to your project, go to step 2.

Step 2 : Creating the Console Server project

Create a console application that will host the service. Add a reference to the project created in step 1, and define your service implementation class as follows:




// Sample service
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class MyService : IWcf
{
    public string Greet(string name)
    {
        return DateTime.Now.ToString() + name;
    }
}

Finally, modify Program.cs to look something like the following:



class Program
{
    static void Main(string[] args)
    {
        try
        {
            // use WcfService.Tcp for NetTcp binding or WcfService.Http for WSHttpBinding
            var hosts = WcfService.DefaultFactory.CreateServers(
                new List<Type> { typeof(MyService) },
                (t) => { return t.Name; },
                (t) => { return typeof(IWcf); },
                "WcfServices",
                8789,
                (sender, exception) => { Trace.Write(exception); },
                (msg) => { Trace.Write(msg); },
                (msg) => { Trace.Write(msg); },
                (msg) => { Trace.Write(msg); });

            Console.WriteLine("Server started....");
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex);
        }
        Console.ReadKey();
    }
}


At this point you should be able to hit F5 and run the server console program.

Step 3 : Creating Console client project

Create another console application and modify its Program.cs to something like the following:




class Program
{
    static void Main(string[] args)
    {
        try
        {
            // use WcfService.Tcp for NetTcp binding or WcfService.Http for WSHttpBinding
            using (var wcf = WcfService.DefaultFactory.CreateChannel(
                Environment.MachineName, 8789, (t) => { return "MyService"; }, "WcfServices"))
            {
                var result = wcf.Client.Greet("Moim");

                Console.WriteLine(result);
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}

You are good to go! Hope this helps somebody (at least myself).

Prompt for Save changes in MMC 3.0 Application (C#)

Microsoft Management Console 3.0 is a managed platform for writing and hosting applications inside the standard Windows configuration console. It provides a simple, consistent and integrated management user interface and administrative console. One of the products I am currently working on uses the MMC SDK; we used it to develop the configuration panel for administrative purposes.

That's the quick background of this post. However, this post is not meant for someone who has never worked with the MMC SDK; I am assuming you already know the basics.

Our application has quite a number of ScopeNodes, and each of them has an associated details page (in MMC terminology a View; in our case most of them are FormViews). All of them render application data in a WinForms UserControl, and we allow users to modify that configuration data in place.
But the problem begins when the user moves to a different scope node and then closes the MMC console window: the changes made earlier are not saved. As you already know, the MMC console is an MDI application and the scope nodes are completely isolated from each other. Therefore, you can't prompt the user to save or discard the pending changes.

I googled a lot for a solution, but ended up with a myriad of frustrations. Many people faced the same problem and ended up implementing the editing in a pop-up dialog, so that they could handle the full lifecycle of the save functionality. But that costs a lot of development work when you have many scope nodes: you need to define two forms for each of them, one for the read-only display and another as a pop-up to edit the data. For my product that was not really viable. In fact, it even looks nasty to get a pop-up for each node configuration.
Anyway, I finally resolved the problem myself. It's not the most elegant way to settle the issue, but it works superbly for my purpose. Here is what I did: I wrote a class that intercepts the Windows messages and takes action when the user tries to close the main console.



using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Windows.Forms;

/// Subclassing the main window handle's WndProc method to intercept
/// the close event
internal class SubClassHWND : NativeWindow
{
    private const int WM_CLOSE = 0x10;
    private MySnapIn _snapIn; // MySnapIn class is derived from SnapIn (MMC SDK)
    private List<SubClassHWND> _childNativeWindows;

    // Constructs a new instance of this class
    internal SubClassHWND(MySnapIn snapIn)
    {
        this._snapIn = snapIn;
        this._childNativeWindows = new List<SubClassHWND>();
    }

    // Starts the hook process
    internal void StartHook()
    {
        // get the handle
        var handle = Process.GetCurrentProcess().MainWindowHandle;

        if (handle != IntPtr.Zero && handle.ToInt32() > 0)
        {
            // assign it now
            this.AssignHandle(handle);
            // get the children
            foreach (var childHandle in GetChildWindows(handle))
            {
                var childSubClass = new SubClassHWND(this._snapIn);

                // assign this
                childSubClass.AssignHandle(childHandle);

                // keep the instance alive
                _childNativeWindows.Add(childSubClass);
            }
        }
    }

    // The overridden windows procedure
    protected override void WndProc(ref Message m)
    {
        if (_snapIn != null && m.Msg == WM_CLOSE)
        {   // if we have a valid snapin instance
            if (!_snapIn.CanCloseSnapIn(this.Handle))
            {   // if we can't close yet
                return; // don't close this then
            }
        }
        // delegate the message to the chain
        base.WndProc(ref m);
    }

    // Requests the handle to close the window
    internal static void RequestClose(IntPtr hwnd)
    {
        SendMessage(hwnd.ToInt32(), WM_CLOSE, 0, 0);
    }

    // Send a Windows Message
    [DllImport("user32.dll")]
    public static extern int SendMessage(int hWnd,
        int Msg,
        int wParam,
        int lParam);

    [DllImport("user32")]
    [return: MarshalAs(UnmanagedType.Bool)]
    public static extern bool EnumChildWindows(IntPtr window, EnumWindowProc callback, IntPtr i);

    public static List<IntPtr> GetChildWindows(IntPtr parent)
    {
        List<IntPtr> result = new List<IntPtr>();
        GCHandle listHandle = GCHandle.Alloc(result);
        try
        {
            EnumWindowProc childProc = new EnumWindowProc(EnumWindow);
            EnumChildWindows(parent, childProc, GCHandle.ToIntPtr(listHandle));
        }
        finally
        {
            if (listHandle.IsAllocated)
                listHandle.Free();
        }
        return result;
    }

    private static bool EnumWindow(IntPtr handle, IntPtr pointer)
    {
        GCHandle gch = GCHandle.FromIntPtr(pointer);
        List<IntPtr> list = gch.Target as List<IntPtr>;
        if (list == null)
        {
            throw new InvalidCastException("GCHandle Target could not be cast as List<IntPtr>");
        }
        list.Add(handle);
        // Return false here instead if you want to cancel the enumeration
        return true;
    }

    public delegate bool EnumWindowProc(IntPtr hWnd, IntPtr parameter);
}

This class has the WndProc method, the windows message dispatcher, which receives all the messages sent to the MMC console main window. The class also sets the same message hook on all the child windows hosted inside the MMC MDI window.

Now we only need to invoke the StartHook method from the SnapIn to activate this interception hook.



// The subclassed SnapIn for my application
public class MySnapIn : SnapIn
{
    internal SubClassHWND SubClassHWND
    {
        get;
        private set;
    }

    protected MySnapIn()
    {
        // create the subclassing support
        SubClassHWND = new SubClassHWND(this);
    }

    // Start the hook now
    protected override void OnInitialize()
    {
        SubClassHWND.StartHook();
    }
}

Now we have the option to do something before closing the application, like prompting with a Yes, No and Cancel dialog, as Notepad does for a dirty file.



// Determines if the snapin can be closed now or not
internal bool CanCloseSnapIn(IntPtr requestWindow)
{
    if (IsDirty)
    {
        // found a dirty node; ask the user whether we can
        // close or not
        this.BeginInvoke(new System.Action(() =>
        {
            using (var dlg = new SnapInCloseWarningDialog())
            {
                var dlgRes = Console.ShowDialog(dlg);

                switch (dlgRes)
                {
                    case DialogResult.Yes:
                        SaveDirtyData(); // save them here
                        IsDirty = false; // so next time this method
                                         // will not prevent closing
                                         // the application
                        SubClassHWND.RequestClose(requestWindow);
                        break;
                    case DialogResult.No:
                        IsDirty = false;
                        SubClassHWND.RequestClose(requestWindow);
                        break;
                    case DialogResult.Cancel:
                        break; // do nothing
                }
            }
        }));
        return false;
    }
    return true;
}


One small problem remains, though. The dispatcher method receives WM_CLOSE on a thread that can't display a window, because the current thread is not really a GUI thread. So we apply a trick: we display the prompt via a delegate (using BeginInvoke) and discard the WM_CLOSE message that we have already intercepted.
Later, when a choice has been made: if the user selected 'Yes' we close the application after saving the data; if 'No' we close the SnapIn without saving; only for 'Cancel' do we do nothing. So the only critical part is how we close the window again. Here is how we can do that:

Notice that the SnapIn's CanCloseSnapIn method takes a parameter which is the handle (an IntPtr in this case) of the window the user tried to close. This has been done on purpose: it offers the possibility to send a WM_CLOSE again to that same window. So even if the user closes an MDI child, only that child window is closed after saving, which is just perfect!

Hope this will help somebody struggling with the same gotcha.

Custom SPGridView

Recently I had to create a custom grid control that allows users to group, sort, page and filter the data displayed in it. I spent a few days figuring out whether any third-party controls (Xceed, Infragistics etc.) could meet my requirements. Sadly, I found that these controls couldn't fulfill what I wanted to do.

Well, my requirements were as follows:

1. The grid should be able to display a *really large* number of rows, so fetching every row from the database would simply kill the application.
2. The grid should allow sorting at the database level, so that the database indexes can be used for efficiency.
3. The filtering should also be performed by the database engine.
4. It should allow the user to group data in it; as you can probably guess, the grouping should also be done by the database engine.
5. It should offer a look-and-feel similar to native SharePoint 2007 lists.

Now when I tried to find a control (commercial or free) that offers these features, I had no other options but to be disappointed.

Almost all of the controls I looked at offer all these features, but they ask for the entire DataTable in order to bind it, which is simply not possible for my purpose. Even if I had used the ObjectDataProvider, I still couldn't do the grouping at the database end.

Interestingly, the SPGridView control (shipped by Microsoft) also doesn't allow multiple groupings, and in grouped scenarios the other features, like filtering, paging and sorting, don't work properly.

Therefore, I set out to create a custom grid control that makes me happy, and eventually I did it. It's not that difficult: I did a complete UI rendition inside the grid control and provided an interface that the client application implements to supply the data.
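
To give a rough idea, a hypothetical sketch of such a data-provider contract could look like the following; all names here are illustrative and not the control's actual API.

using System.Collections.Generic;
using System.Data;

// Hypothetical sketch only; the actual interface may differ.
public interface IGridDataProvider
{
    // Total row count for the current filter, so the grid can build
    // its pager without fetching all rows.
    int GetRowCount(string filterExpression);

    // Returns a single page of data; sorting, filtering and grouping
    // are pushed down to the database engine.
    DataTable GetPage(int pageIndex, int pageSize,
        string sortExpression, string filterExpression,
        IList<string> groupByColumns);
}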

It's working for me now. I am still doing some QC to make sure it's decent enough; after that I'm gonna release the source of the control. But for now, let's have a look at the video where the grid control can be seen in action!

Stay tuned!

jQuery Autocomplete inside partially rendered user control?

Yesterday a very small issue kept me in hell for almost an hour. Finally I was able to figure out a workaround.

I am not claiming this is the best approach to solve the problem, but it definitely worked nicely for me, and I have not found any downside to it yet.

Well, let's get into the story. I was using the awesome jQuery Autocomplete plug-in inside my ASP.NET MVC application. Initially I put the autocomplete textbox into the site.master page and it worked superbly! I used the HtmlHelper extension model to accomplish my task; John Zablocki has written a wonderful post about this approach, and I basically used it with subtle modifications. For those who have no idea what the jQuery Autocomplete plugin is: it's a regular HTML text input control that is registered with the jQuery autocomplete plugin library, and when the user types something into it, it makes an AJAX request to the server to get the possible values for the characters the user has typed. Again, please read John's blog for a good explanation.

My problem began when I moved this control from the master page and put it inside a view user control in my ASP.NET MVC application, one that renders partially. To explain more clearly, my user control has the following code:

Now, from the Index.aspx (I am talking about the Home\Index.aspx that gets generated for a hello-world ASP.NET MVC application), I am rendering this user control using AJAX. And I noticed that the jQuery autocomplete was not working. Why?

Well, after spending a few minutes I understood that the script used to register the control with jQuery does not execute when the DOM is modified using AJAX. Which means that if you write a simple ascx control with only the following script code,

<script type="text/javascript">alert("Hi");</script>

and render this ascx using AJAX, you will not see anything, whereas you would expect a message box saying "Hi". When you modify the DOM dynamically, any script blocks inside the new content are not executed automatically unless you do something to force this. This is the reason why my jQuery plugin was not working. So what I did is force the browser to load all the scripts returned by the server during the AJAX call. I loaded the JavaScript at window level; it actually depends on you where it best suits.

So I was able to make this work with an OnSuccess event handler for my AJAX call: once the AJAX invocation finished, I did a forced load of all the script blocks residing in the ascx. Voila!

Parallel Extensions of .NET 4.0

Last night, I was playing around with some cool new features of .NET Framework 4.0, of which a CTP has been released as a VPC image that can be downloaded from here.

There are many new things Microsoft plans to release with Visual Studio 10, and Parallel Extensions is one of them. Parallel Extensions is a set of APIs that lives under the System namespace, inside the mscorlib assembly. Therefore, programmers do not need to reference any other assembly to benefit from this cool feature; they get it out of the box.

Well, what are Parallel Extensions all about?

Microsoft's vision is that for multithreaded applications, developers have to focus on many issues around managing threads, scalability and so on. Parallel Extensions is an attempt to let developers focus on the core business functionality rather than the thread-management plumbing. It provides some cool ways to manage concurrent applications. Parallel Extensions mainly comprises three major areas. The first is the Task Parallel Library: there is a Task class, and that is all the developer should worry about; they do not bother with threads, rather they consider themselves to be writing tasks, and the framework executes those tasks in parallel. The second major area is PLINQ, which is basically LINQ to Objects operating in parallel mode. And the third is the coordination data structures.

Let's look at some code snippets to get a brief idea of this.

We will use a simple console application and, as the solution explorer shows, we are not referencing any assemblies beyond the defaults.

So Parallel Extensions does not require any special libraries!

The method takes an input integer, doubles it, and finds the prime numbers from zero up to that value. This is nothing particularly useful, but enough to demonstrate the point. The method also writes the executing thread ID to the console window.
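
A minimal sketch of what such a method could look like (the FindPrimes name and the details are illustrative):

using System;
using System.Threading;

// Illustrative sketch: find primes up to twice the input and
// print the executing thread's ID with each one.
static void FindPrimes(int input)
{
    int limit = input * 2;
    for (int n = 2; n <= limit; n++)
    {
        bool isPrime = true;
        for (int d = 2; d * d <= n; d++)
        {
            if (n % d == 0) { isPrime = false; break; }
        }
        if (isPrime)
            Console.WriteLine("Thread {0} found prime {1}",
                Thread.CurrentThread.ManagedThreadId, n);
    }
}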

Now, let's first create a few threads to execute the method written above. We will create 10 threads to execute it simultaneously, as sketched below.
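
A sketch of the manual threading version, under the same assumptions:

// Requires System.Threading and System.Collections.Generic.
// Spawning 10 threads manually; we own each Thread instance.
var threads = new List<Thread>();
for (int i = 0; i < 10; i++)
{
    var t = new Thread(() => FindPrimes(100));
    threads.Add(t);
    t.Start();
}
threads.ForEach(t => t.Join()); // we can Join(), Abort() etc.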

The thing to notice here is that this way we have the thread instances under our control, so we can invoke methods like Join(), Abort() etc., but the developer is responsible for managing the threads on their own. The code produces the following output.

See, we actually had 10 different threads generated to execute this. Now, let's use thread pool threads for the same job, roughly as in the sketch below.
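
A sketch of the thread pool version:

// Queue the same work items to the thread pool instead;
// we no longer own the threads that execute them.
for (int i = 0; i < 10; i++)
{
    ThreadPool.QueueUserWorkItem(_ => FindPrimes(100));
}
Console.ReadLine(); // crude wait, since pool threads cannot be joined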

This generates output like the following.

Look, it is using the same thread (6) for all the work items; the .NET thread pool uses thread objects efficiently. But this way we lose the control that we had in the previous snippet. For example, we can't cancel a certain thread directly, because we don't actually know which thread is going to execute which work item.

Now, let's have a look at the cool Parallel Extensions Task class. It's pretty much like the Thread implementation and offers methods like Wait(), CancelAndWait() etc. to interact with the task. In addition, it takes advantage of the execution environment: if you run the application on a multi-core processor, it will spawn more threads to carry out the tasks. Since I am using this in a VPC with a single-core CPU, it uses one thread instance to carry out the tasks. But this is no longer my headache; managing threads, or even thinking about them, is taken care of by the parallel framework. Cool! A rough sketch follows.
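
As a rough sketch, using the CTP-era Task.Create (the final release replaced it with Task.Factory.StartNew):

// Creating tasks instead of threads; the scheduler maps them
// onto as many threads as the hardware makes worthwhile.
for (int i = 0; i < 10; i++)
{
    Task.Create(delegate { FindPrimes(100); });
}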

This generates the same output as the thread pool snippet, but only because I ran it in a VPC. On a multicore machine it will spawn more threads to get optimal performance.

Parallel Static class

Well, this is even more interesting. It offers a few iterative methods that automatically execute each iteration as a task, and of course in parallel. Isn't that cool? See the sketch below.
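
For example, a parallel loop over our sketched FindPrimes could look like this:

// Each iteration is executed as a task and scheduled
// across the available cores.
Parallel.For(0, 10, i =>
{
    FindPrimes(100);
});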

I hope to explain PLINQ in my next post. Happy programming!