Legacy monolith applications built to run on a single beefy server can take advantage of containers to simplify the deployment model, and potentially open the door to re-architecting the system piece by piece without triggering a complete rewrite. I ran into a scenario where I was considering wrapping a large monolith (with many threads in it) into multiple containers and introducing a mode of execution, so that each container instance runs a specific mode of operation and the application can evolve toward a micro-service-based architecture in the future. Splitting into containers is the easier part; the harder part was introducing an IPC mechanism to enable communication between these container instances. In this post, I will write about some IPC options I have exercised in these scenarios.
The application is written in .NET Framework; therefore, I couldn't use .NET Core and Linux machines, and I have only investigated Windows containers. I have tried the following technologies for IPC and did some benchmarking on latency:
- TCP/IP channel (NetTcpBinding in WCF)
- gRPC
- Web Sockets
- Unix Domain Sockets (did this entirely for fun 🙂 )
Environment and Hardware specs
Most of these IPC technologies (e.g. TCP, gRPC, Web Sockets) also allow remote invocations, but I have only tried them on a single machine, as that's what I wanted to investigate. I have run these benchmarks on a Windows 10 client machine with the following configuration:
BenchmarkDotNet=v0.11.5, OS=Windows 10.0.18362
Intel Core i7-8650U CPU 1.90GHz (Kaby Lake R), 1 CPU, 8 logical and 4 physical cores
[Host]: .NET Framework 4.7.2 (CLR 4.0.30319.42000), 32bit LegacyJIT-v4.8.3815.0
Windows Containers: Quick refresher
Windows Server containers provide application isolation through process and namespace isolation technology. That is often referred to as process-isolated containers. A Windows Server container shares a kernel with the container host and all containers running on the host. These process-isolated containers don’t provide a hostile security boundary and shouldn’t be used to isolate untrusted code. Because of the shared kernel space, these containers require the same kernel version and configuration.
However, Windows containers also provide a second type of isolation, called Hyper-V isolation. Hyper-V isolation expands on the isolation provided by Windows Server containers by running each container in a highly optimized virtual machine.
In this configuration, the container host doesn’t share its kernel with other containers on the same host. These containers are designed for hostile multi-tenant hosting with the same security assurances of a virtual machine. Since these containers don’t share the kernel with the host or other containers on the host, they can run kernels with different versions and configurations (within supported versions). For example, all Windows containers on Windows 10 use Hyper-V isolation to utilize the Windows Server kernel version and configuration.
Running a container on Windows with or without Hyper-V isolation is a runtime decision. We can initially create the container with Hyper-V isolation, and then later at runtime choose to run it as a Windows Server container instead.
I have run the IPC stack for each technology in three different setups:
- Bare metal (running on my Windows 10 client)
- Two containers (server and client) running in Hyper-V isolation (--isolation=hyperv)
- Two containers (server and client) running in process isolation (--isolation=process); a sample docker run invocation is sketched right after this list
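For reference, this is roughly how the isolation mode is selected when starting the server container; the image name and port mapping below are placeholders, not the actual sample images:

```
# The isolation mode is chosen at "docker run" time; image name and port are placeholders.
docker run -d --isolation=hyperv  -p 8085:8085 ipc-sample-server
docker run -d --isolation=process -p 8085:8085 ipc-sample-server
```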
IPC is, by its nature, a non-deterministic operation. Hence, I wanted to focus my investigation on latencies instead of throughput. I created some IPC handshake applications that exchange a message of approximately 1 KB from client to server. I ran them many times (>10,000 iterations) and measured the latency percentiles.
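As a rough sketch of how such a latency measurement can be expressed with BenchmarkDotNet, the class and method names here are illustrative placeholders, not the actual sample code:

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Illustrative only: SendToServer stands in for whichever IPC client
// (WCF, gRPC, SignalR, ...) is being measured.
public class IpcLatencyBenchmark
{
    private readonly byte[] _payload = new byte[1024]; // ~1 KB message

    [Benchmark]
    public void RoundTrip() => SendToServer(_payload);

    private static void SendToServer(byte[] payload)
    {
        // placeholder for the real IPC call to the server container
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<IpcLatencyBenchmark>();
}
```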
And of course, I am running Docker for Windows with the following version:
WCF TCP/IP Channel
The TCP channel (NetTcpBinding) is probably the most commonly used binding in WCF applications. Here I have a simple WCF server and client that send some bytes over the wire. TCP is a connection-based, stream-oriented delivery service with end-to-end error detection and correction. Connection-based means that a communication session between hosts is established before exchanging data. A host is any device on a TCP/IP network identified by a logical IP address.
The sample hosts a TCP server and waits for clients to connect. Once connected, the client sends 1 KB of data to the server n times.
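A minimal sketch of what that WCF setup can look like; the contract, endpoint names, and port are assumptions for illustration, not the exact sample code:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    void Send(byte[] payload);   // the client pushes ~1 KB per call
}

public class EchoService : IEchoService
{
    public void Send(byte[] payload) { /* nothing to do for the latency test */ }
}

public static class Server
{
    public static void Main()
    {
        // Host the service over NetTcpBinding; SecurityMode.None keeps this
        // cross-container sketch simple (the binding default is Transport security).
        using (var host = new ServiceHost(typeof(EchoService)))
        {
            host.AddServiceEndpoint(typeof(IEchoService),
                new NetTcpBinding(SecurityMode.None), "net.tcp://localhost:8085/echo");
            host.Open();
            Console.WriteLine("Server listening...");
            Console.ReadLine();
        }
    }
}

public static class Client
{
    public static void Run(string serverHost, int iterations)
    {
        var factory = new ChannelFactory<IEchoService>(
            new NetTcpBinding(SecurityMode.None),
            $"net.tcp://{serverHost}:8085/echo");
        IEchoService proxy = factory.CreateChannel();

        var payload = new byte[1024];                // ~1 KB message
        for (int i = 0; i < iterations; i++)
            proxy.Send(payload);

        ((IClientChannel)proxy).Close();
        factory.Close();
    }
}
```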
Running it on bare-metal:
Now running it inside two containers (server and client) in Hyper-V isolation:
We can see that it adds latency compared to bare metal. Let's run it in process isolation mode:
That improves things a lot; it's almost as fast as bare metal.
gRPC
Like many RPC systems, gRPC is based around the idea of defining a service, specifying the methods that can be called remotely along with their parameters and return types. By default, gRPC uses protocol buffers as the Interface Definition Language (IDL) for describing both the service interface and the structure of the payload messages. It is possible to use other alternatives if desired.
I have written a sample project similar to the TCP channel one above, but this time both the server and the client use gRPC for messaging.
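For illustration, here is a rough sketch of the client side with Grpc.Core (which runs on .NET Framework); the Echo service, its generated EchoClient, and the Payload message are assumptions based on a hypothetical .proto file, not the exact sample code:

```csharp
using Google.Protobuf;
using Grpc.Core;

// Hypothetical .proto behind this sketch:
//   service Echo    { rpc Send (Payload) returns (Ack); }
//   message Payload { bytes data = 1; }
//   message Ack     { }
public static class GrpcClientSample
{
    public static void Run(string serverHost, int iterations)
    {
        var channel = new Channel($"{serverHost}:50051", ChannelCredentials.Insecure);
        var client = new Echo.EchoClient(channel);           // generated by the protobuf compiler

        var payload = new Payload { Data = ByteString.CopyFrom(new byte[1024]) }; // ~1 KB
        for (int i = 0; i < iterations; i++)
            client.Send(payload);                             // blocking unary call

        channel.ShutdownAsync().Wait();
    }
}
```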
Let's run the same exercise with gRPC.
What I see is that process isolation performs pretty darn well compared to Hyper-V isolation.
Web Sockets
Web Sockets (over HTTPS) give an easy programming model for network communication. The WebSocket API is an advanced technology that makes it possible to open a two-way interactive communication session between the user's browser and a server. With this API, you can send messages to a server and receive event-driven responses without having to poll the server for a reply.
I didn't program against the WebSocket API directly though; I used SignalR self-hosting to do that.
In this exercise I created the same messaging over web sockets.
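A minimal sketch of the SignalR self-host and client, assuming an OWIN-based setup; the hub name, method, and URL are placeholders:

```csharp
using System;
using Microsoft.AspNet.SignalR;
using Microsoft.AspNet.SignalR.Client;
using Microsoft.Owin.Hosting;
using Owin;

// Hub and method names are illustrative, not the exact sample code.
public class EchoHub : Hub
{
    public void Send(byte[] payload)
    {
        // receives the ~1 KB message; nothing to do for the latency test
    }
}

public class Startup
{
    public void Configuration(IAppBuilder app) => app.MapSignalR();
}

public static class Server
{
    public static void Main()
    {
        using (WebApp.Start<Startup>("http://*:8088"))
        {
            Console.WriteLine("SignalR server running...");
            Console.ReadLine();
        }
    }
}

public static class Client
{
    public static void Run(string serverHost, int iterations)
    {
        var connection = new HubConnection($"http://{serverHost}:8088");
        IHubProxy proxy = connection.CreateHubProxy("EchoHub");
        connection.Start().Wait();

        var payload = new byte[1024];                        // ~1 KB message
        for (int i = 0; i < iterations; i++)
            proxy.Invoke("Send", payload).Wait();

        connection.Stop();
    }
}
```

Note that SignalR serializes the byte array as JSON (base64-encoded), so the on-wire payload ends up somewhat larger than 1 KB.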
Unix Domain Sockets
As I mentioned above, Linux and .NET Core were not options for this exercise. However, I couldn't resist giving it a shot and running the same messaging over a Unix domain socket on the Linux kernel.
A Unix domain socket or IPC socket is a data communications endpoint for exchanging data between processes executing on the same host operating system. Valid socket types in the UNIX domain are: SOCK_STREAM (compare to TCP), for a stream-oriented socket; SOCK_DGRAM (compare to UDP), for a datagram-oriented socket that preserves message boundaries (as on most UNIX implementations, UNIX domain datagram sockets are always reliable and don’t reorder datagrams); and SOCK_SEQPACKET (compare to SCTP), for a sequenced-packet socket that is connection-oriented, preserves message boundaries, and delivers messages in the order that they were sent. The Unix domain socket facility is a standard component of POSIX operating systems.
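For illustration, a rough sketch of the client side on .NET Core (2.1+), which exposes UnixDomainSocketEndPoint; the socket path is a placeholder:

```csharp
using System.Net.Sockets;

public static class UdsClientSample
{
    public static void Run(int iterations)
    {
        // "/tmp/ipc-bench.sock" is a placeholder; the server must be listening on the same path.
        var endpoint = new UnixDomainSocketEndPoint("/tmp/ipc-bench.sock");

        using (var socket = new Socket(AddressFamily.Unix, SocketType.Stream, ProtocolType.Unspecified))
        {
            socket.Connect(endpoint);

            var payload = new byte[1024];        // ~1 KB message
            for (int i = 0; i < iterations; i++)
                socket.Send(payload);            // stream-oriented, like SOCK_STREAM above
        }
    }
}
```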
Here's what I get when running similar messaging that leverages Unix domain sockets:
That’s blazing fast! Sadly I couldn’t use it for my purpose.
Putting all the numbers into a chart, I get this:
Disclaimer 1: Benchmarking is difficult; there are so many moving factors to get right. I wouldn't put any conclusive statement on it, such as claiming one IPC technique is faster than another. But the source code is included, and you can run it in your environment and make your own judgement.
Disclaimer 2: Another interesting technology I wanted to try out was Windows named pipes. The source code is in the same repository, but I couldn't get it to work when communicating between containers. I will update the post once I have some progress there.
All remarks and questions are welcome. Thanks for reading!