If you have used Azure Functions, you are probably aware that Azure Functions leverages a storage account underneath, both to support file storage (where the function app code resides as an Azure File share) and as a backing store for the function keys (the secrets that are used in function invocations).
If you look inside the container, you will find files with the following contents:
Figure: These JSON files have the function keys
Figure: Encrypted master keys and other function keys
I have been in conversations where storing these keys in the storage account was not appreciated. The security and governance team was looking for a better place to keep them, one where the secrets could be further restricted from developer access.
Of course, we can put a VNET around the storage account and use Private Link, but that has other consequences, because the content (the function's implementation artifacts) is also stored in the same storage account. Configuring two separate storage accounts can address this better; however, it can make the setup more complicated than it has to be.
A better option could be to store these keys in a Key Vault as the backing store, which is a great feature of Azure Functions, but I've found that few people are aware of it due to the lack of documentation. In this article I will show you how to move these secrets to a Key Vault.
To do so, we need to configure a few application settings on the Function App. They are given below (the setting names come from the AzureWebJobsSecretStorage* family documented for the Azure Functions runtime; verify them against your runtime version):
AzureWebJobsSecretStorageType = keyvault
AzureWebJobsSecretStorageKeyVaultName = <Key Vault Name>
AzureWebJobsSecretStorageKeyVaultConnectionString = <Connection String, or leave it empty when Managed Identity is configured on the Function App>
Once you have configured the above settings, you need to enable Managed Identity on your Azure Function. You can accomplish that in the Identity section under the Platform features tab. That is a much better option in my opinion, as we don't need to maintain any more secrets to talk to Key Vault securely. Go ahead and turn the system identity toggle on. This will create a service principal with the same name as your Azure Function application.
Figure: Enabling system assigned managed identity on Function app
The next step is to add an entry to the Key Vault's access policies for the service principal created in the earlier step.
Figure: Key vault Access policy
That's it. Hit your function app now and you will see the keys stored inside the Key Vault. You can safely delete the container from the storage account now.
Figure: Secrets are stored in Key Vault
I hope this saves you time when you are concerned about keeping the keys in a storage account.
The Azure Functions runtime is open source and on GitHub. You can have a look at the sources and find other interesting ideas to play with.
In many organizations, especially in large enterprises, there's a need to automate the management of Azure DevOps projects and team members. Manually managing a large number of Azure DevOps projects, the teams for those projects and the users in those teams, and on-boarding and off-boarding team members, is not trivial.
Besides managing the users, sometimes we just need an overview (documentation, if you like) of the users and teams of our projects. Terraform is a great tool for Infrastructure as Code – it not only allows provisioning infrastructure on demand, but also gives us nice documentation that can be version controlled in a source control system. The workflow looks roughly like the following:
I am developing a Terraform Provider for Azure DevOps that helps me use Terraform for provisioning Azure DevOps projects, Teams and members. In this article I will share how I am building it.
This provider doesn't implement the complete set of Azure DevOps REST APIs. It's limited to projects, teams and member associations only. It's not recommended to use it in production scenarios.
Terraform is an amazing tool that lets you define your infrastructure as code. Under the hood it's an incredibly powerful state machine that makes API requests and marshals resources. Terraform has lots of providers out there – for almost every major cloud, and for many other systems like Kubernetes, Palo Alto Networks, etc.
In a nutshell, if a system has a REST API, it can be manipulated with a Terraform provider. Azure DevOps also has a Terraform provider, which doesn't currently provide resources to create teams and members. Hence, I am writing my own – shamelessly reusing Microsoft's Terraform provider (referenced above) for project creation.
Setting up the Go Environment
Terraform providers and plugins are binaries that Terraform communicates with at runtime via RPC. It's theoretically possible to write a provider in any language, but to be honest, I haven't come across any providers written in a language other than Go. Terraform provides helper libraries in Go to aid in writing and testing providers.
I am developing on Windows 10 and didn't want to install Go on my local machine. Containers come to the rescue, of course. I am using the “Remote Development” extension in VS Code. This extension allows me to keep the source code on my local machine and compile and build it inside a container – like magic!
Figure: Remote Development extension in VSCode – running container to build local repository.
Creating the provider
To create a Terraform provider, we need to write the logic for managing the creation, reading, updating and deletion (CRUD) of a resource (i.e. Azure DevOps projects, teams and members in this scenario), and Terraform will take care of the rest: state, locking, the templating language and managing the lifecycle of the resources. In this repository I have a minimal implementation that supports creating Azure DevOps projects, teams and their members.
First of all, we define our provider and resources in the main.go file.
Next to that, we define the provider schema (the attributes it supports as inputs and outputs, its resources, etc.).
We are using an Azure DevOps Personal Access Token to communicate with the Azure DevOps REST API. The Go client for Azure DevOps from Microsoft, which is used as a dependency, immensely simplified the implementation and also helped me learn the flow.
We can compile the provider application using the following command:
> GOOS=windows GOARCH=amd64 go build -o terraform-provider-azuredevops.exe
As I am using a Debian Docker image for Go, I need to specify my target OS (GOOS=windows) and CPU architecture (GOARCH=amd64) when I build the provider. This will produce the Terraform provider for Azure DevOps as an executable.
Although it's an executable, it's not meant to be launched directly from the command prompt. Instead, I copy it to the “%APPDATA%\terraform.d\plugins\windows_amd64” folder on my machine.
Terraform Script for Azure DevOps
Now we can write the Terraform file (.tf) that describes the Azure DevOps project, team, members, etc.
With this Terraform file in place, we can now launch the following command to initialize our Terraform environment:
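> terraform init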
The terraform init command is used to initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control.
The terraform plan command is used to create an execution plan. Terraform performs a refresh, and then determines what actions are necessary to achieve the desired state specified in the configuration files. This command is a convenient way to check whether the execution plan for a set of changes matches your expectations without making any changes to real resources or to the state. For example, terraform plan might be run before committing a change to version control, to create confidence that it will behave as expected.
Figure: terraform plan output – shows exactly what is going to happen if we apply these changes to Azure DevOps
The terraform apply command is used to apply the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a terraform plan execution plan. We will launch it with the “-auto-approve” flag to skip the interactive approval prompt.
Now we can go to our Azure DevOps organization and, sure enough, there's a new project created with the configuration we scripted in the Terraform file.
Taking it further
Now we can check the Terraform file (main.tf above) into an Azure DevOps repository and put a branch policy on it. That will force any change (such as creating new projects or adding and removing team members) to require a pull request and be reviewed by peers (the four-eyes principle). Once a pull request is approved, a simple Azure Pipeline can be triggered that runs terraform apply. With that, my workflow is automated and I also have a nice history in Git that records the purpose of every change made in the past.
A while ago, I built a web-based self-service portal that allowed multiple teams in the organisation to set up their access control lists (ACLs) for the corresponding data lake folders.
The portal application was targeting Azure Data Lake Gen 1. Recently I wanted to achieve the same, but on Azure Data Lake Gen 2. At the time of writing this post, there is no official NuGet package for ACL management targeting Data Lake Gen 2; one must rely on the REST API only.
Furthermore, the REST API documentation does not provide example snippets like many other Azure resources do. Therefore, it takes time to demystify the REST APIs for manipulating ACLs. The good news is, I have done that for you and will share a straightforward C# class that wraps the details and issues the correct REST API calls to a Data Lake Store Gen 2.
About Azure Data Lake Store Gen 2
Azure Data Lake Storage Gen2 is a set of capabilities dedicated to big data analytics. Data Lake Storage Gen2 is significantly different from its earlier version, Azure Data Lake Storage Gen1: Gen2 is entirely built on Azure Blob storage.
Data Lake Storage Gen2 is the result of converging the capabilities of two existing Azure storage services, Azure Blob storage and Azure Data Lake Storage Gen1. Gen1 features such as file system semantics and directory- and file-level security and scale are combined with the low-cost, tiered storage and high-availability/disaster-recovery capabilities of Azure Blob storage.
Let’s get started!
Create a Service Principal
First, we need a service principal. We will use this principal to authenticate to Azure Active Directory (using the OAuth 2.0 protocol) in order to authorize our REST calls. We will use the Azure CLI to create it.
az ad sp create-for-rbac --name ServicePrincipalName
Add required permissions
Now you need to grant permission for your application to access Azure Storage.
Click on the application Settings
Click on Required permissions
Click on Add
Click Select API
Filter on Azure Storage
Click on Azure Storage
Click the checkbox next to Access Azure Storage
Now we have a Client ID, Client Secret and Tenant ID (take the Tenant ID from the Properties tab of Azure Active Directory, where it is listed as Directory ID).
Access Token from Azure Active Directory
Let’s write some C# code to get an Access Token from Azure Active Directory:
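The original snippet was embedded as a gist; here is a minimal sketch of such a token provider, assuming the OAuth 2.0 client-credentials flow against the Azure AD token endpoint and System.Text.Json for parsing (the class and method names are mine, not necessarily the post's):

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public class TokenProvider
{
    private static readonly HttpClient http = new HttpClient();

    // Acquires a bearer token for Azure Storage (resource https://storage.azure.com/)
    // using the service principal's client id and secret.
    public async Task<string> GetAccessTokenAsync(string tenantId, string clientId, string clientSecret)
    {
        var body = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["grant_type"] = "client_credentials",
            ["client_id"] = clientId,
            ["client_secret"] = clientSecret,
            ["resource"] = "https://storage.azure.com/"
        });

        var response = await http.PostAsync($"https://login.microsoftonline.com/{tenantId}/oauth2/token", body);
        response.EnsureSuccessStatusCode();

        using var json = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        return json.RootElement.GetProperty("access_token").GetString();
    }
}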
Creating ADLS Gen 2 REST client
Once we have the token provider, we can jump into implementing the REST client for Azure Data Lake.
Data Lake ACLs and POSIX permissions
The security model for Data Lake Gen2 supports ACL and POSIX permissions along with some extra granularity specific to Data Lake Storage Gen2. Settings may be configured through Storage Explorer or through frameworks like Hive and Spark. We will do that via REST API in this post.
There are two kinds of access control lists (ACLs), Access ACLs and Default ACLs.
Access ACLs: These control access to an object. Files and folders both have Access ACLs.
Default ACLs: A “template” of ACLs associated with a folder that determine the Access ACLs for any child items that are created under that folder. Files do not have Default ACLs.
Here’s the table of allowed grant types:
When we define ACLs, we need to use a short form of these grant types. The Microsoft documentation explains the short forms in the table below:
However, in our code we also simplify the POSIX ACL notation by using some supporting classes, shown below. That way, consumers of the REST client do not need to spend time building the short form of their intended grant criteria.
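For illustration, here is a minimal sketch of what such a supporting class could look like (the type is hypothetical, not the post's original implementation); it renders an entry in the short form shown above, e.g. "user:<object id>:rwx":

// Hypothetical helper that builds one short-form POSIX ACL entry.
public class AclEntry
{
    public string Scope { get; set; } = "user";   // user, group, other or mask
    public string ObjectId { get; set; } = "";     // AAD object id; empty for the owning user/group
    public bool Read { get; set; }
    public bool Write { get; set; }
    public bool Execute { get; set; }
    public bool IsDefault { get; set; }            // prefix with "default:" for Default ACLs

    public override string ToString() =>
        $"{(IsDefault ? "default:" : "")}{Scope}:{ObjectId}:" +
        $"{(Read ? "r" : "-")}{(Write ? "w" : "-")}{(Execute ? "x" : "-")}";
}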
Now we can create methods to perform the different REST calls; let's start by creating a file system.
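The original method was embedded as a gist; below is a minimal sketch of such a REST client, assuming the TokenProvider shown earlier (the class name, API version header and structure are illustrative, not the post's exact code):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public class AdlsGen2Client
{
    private readonly HttpClient _http = new HttpClient();
    private readonly string _accountName;
    private readonly Func<Task<string>> _getToken;

    public AdlsGen2Client(string accountName, Func<Task<string>> getToken)
    {
        _accountName = accountName;
        _getToken = getToken;
    }

    // Builds an authenticated request against https://{account}.dfs.core.windows.net/
    private async Task<HttpRequestMessage> NewRequestAsync(HttpMethod method, string pathAndQuery)
    {
        var request = new HttpRequestMessage(method, $"https://{_accountName}.dfs.core.windows.net/{pathAndQuery}");
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", await _getToken());
        request.Headers.Add("x-ms-version", "2018-11-09");
        return request;
    }

    // PUT {filesystem}?resource=filesystem creates a new file system.
    public async Task CreateFileSystemAsync(string fileSystem)
    {
        var request = await NewRequestAsync(HttpMethod.Put, $"{fileSystem}?resource=filesystem");
        (await _http.SendAsync(request)).EnsureSuccessStatusCode();
    }
}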
Here we retrieve an access token and then issue a REST call to the Azure Data Lake Storage Gen 2 API to create a new file system. Next, we will create a folder and a file in it, and then set some access control on them.
Let’s create the folder:
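A sketch of the directory-creation method, added to the same AdlsGen2Client class from above:

    // PUT {filesystem}/{directory}?resource=directory creates a folder.
    public async Task CreateDirectoryAsync(string fileSystem, string directory)
    {
        var request = await NewRequestAsync(HttpMethod.Put, $"{fileSystem}/{directory}?resource=directory");
        (await _http.SendAsync(request)).EnsureSuccessStatusCode();
    }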
And now let's create a file in it. File creation (ingestion into the Data Lake) is not that straightforward; at least, one can't do it with a single call. We have to first create an empty file, then we can write some content into it. We can also append content to an existing file. Finally, we have to flush the buffer so the new content gets persisted.
Let's do that. First, we will see how to create an empty file:
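A sketch of that call, again on the AdlsGen2Client class:

    // PUT {filesystem}/{path}?resource=file creates an empty (zero-length) file.
    public async Task CreateEmptyFileAsync(string fileSystem, string path)
    {
        var request = await NewRequestAsync(HttpMethod.Put, $"{fileSystem}/{path}?resource=file");
        (await _http.SendAsync(request)).EnsureSuccessStatusCode();
    }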
The above snippet creates an empty file. Now we will read all the content from a local file (on the PC) and write it into the empty file in Azure Data Lake that we just created.
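A sketch of the append-and-flush sequence on the same class; the bytes are appended at position 0 and then flushed at the total length so the content gets persisted:

    // PATCH ?action=append&position=0 uploads the bytes; PATCH ?action=flush&position={length} persists them.
    public async Task UploadContentAsync(string fileSystem, string path, byte[] content)
    {
        var append = await NewRequestAsync(new HttpMethod("PATCH"), $"{fileSystem}/{path}?action=append&position=0");
        append.Content = new ByteArrayContent(content);
        (await _http.SendAsync(append)).EnsureSuccessStatusCode();

        var flush = await NewRequestAsync(new HttpMethod("PATCH"), $"{fileSystem}/{path}?action=flush&position={content.Length}");
        flush.Content = new ByteArrayContent(Array.Empty<byte>());   // forces Content-Length: 0 on the flush call
        (await _http.SendAsync(flush)).EnsureSuccessStatusCode();
    }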
Right! Now it's time to set access control on a directory, or on files inside a directory. Here's the method we will use to do that.
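A sketch of that method; the ACL specification is passed in the x-ms-acl header using the short form discussed earlier (it could, for example, be built from the AclEntry helper above):

    // PATCH {filesystem}/{path}?action=setAccessControl sets the ACL via the x-ms-acl header,
    // e.g. "user::rwx,group::r-x,other::---,user:<object id>:rwx".
    public async Task SetAccessControlAsync(string fileSystem, string path, string aclSpec)
    {
        var request = await NewRequestAsync(new HttpMethod("PATCH"), $"{fileSystem}/{path}?action=setAccessControl");
        request.Headers.Add("x-ms-acl", aclSpec);
        (await _http.SendAsync(request)).EnsureSuccessStatusCode();
    }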
The entire file system REST API class can be found here. Here's an example of how we can use these methods from a console application.
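A hedged sketch of such usage from an async Main, wiring the token provider to the client (account, file system, paths and credentials are placeholders):

    var tokenProvider = new TokenProvider();
    var client = new AdlsGen2Client("mydatalakeaccount",
        () => tokenProvider.GetAccessTokenAsync("<tenant id>", "<client id>", "<client secret>"));

    await client.CreateFileSystemAsync("myfilesystem");
    await client.CreateDirectoryAsync("myfilesystem", "raw/sales");
    await client.CreateEmptyFileAsync("myfilesystem", "raw/sales/readme.txt");
    await client.UploadContentAsync("myfilesystem", "raw/sales/readme.txt", System.IO.File.ReadAllBytes(@"C:\temp\readme.txt"));
    await client.SetAccessControlAsync("myfilesystem", "raw/sales", "user::rwx,group::r-x,other::---");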
Until an official client package is released, if you're into Azure Data Lake Store Gen 2 and wondering how to accomplish these REST calls, I hope this post helps you move forward!
Legacy monolith applications built to run on a single beefy server can take advantage of containers to simplify the deployment model, and doing so also potentially opens up the possibility to re-architect piece by piece without triggering a complete rewrite. I ran into a scenario where I was considering wrapping a large monolith (with many threads in it) into multiple containers and introducing a mode of execution, so that each container instance runs a specific mode of operation, leading towards a micro-service-based architecture in the future. Splitting into containers is rather easy, but then I needed to introduce an IPC mechanism to enable communication between these container instances. In this post, I will describe some IPC options that I have exercised in this scenario.
The application is written in .NET Framework; therefore, I couldn't use .NET Core and Linux machines, and I have only investigated Windows containers. I have tried the following technologies for IPC and did some benchmarking on latency.
Most of these IPC technologies (e.g. TCP, gRPC, Web Sockets) also allow remote invocations, but I have only tried them on a single machine, as that's what I wanted to investigate. I ran these benchmarks on a Windows 10 client machine with the following configuration:
BenchmarkDotNet=v0.11.5, OS=Windows 10.0.18362
Intel Core i7-8650U CPU 1.90GHz (Kaby Lake R), 1 CPU, 8 logical and 4 physical cores
[Host]: .NET Framework 4.7.2 (CLR 4.0.30319.42000), 32bit LegacyJIT-v4.8.3815.0
Windows Container: Quick refresh
Windows Server containers provide application isolation through process and namespace isolation technology. That is often referred to as process-isolated containers. A Windows Server container shares a kernel with the container host and all containers running on the host. These process-isolated containers don’t provide a hostile security boundary and shouldn’t be used to isolate untrusted code. Because of the shared kernel space, these containers require the same kernel version and configuration.
However, windows containers also provide a different type of isolation – called Hyper-V isolation. Hyper-V isolation expands on the isolation provided by Windows Server containers by running each container in a highly optimized virtual machine.
In this configuration, the container host doesn’t share its kernel with other containers on the same host. These containers are designed for hostile multi-tenant hosting with the same security assurances of a virtual machine. Since these containers don’t share the kernel with the host or other containers on the host, they can run kernels with different versions and configurations (within supported versions). For example, all Windows containers on Windows 10 use Hyper-V isolation to utilize the Windows Server kernel version and configuration.
Running a container on Windows with or without Hyper-V isolation is a runtime decision. We can initially create the container with Hyper-V isolation, and then later at runtime choose to run it as a Windows Server container instead.
I have run the IPC stack for each technology in three different setups:
Bare metal (running on my Windows 10 client)
Two containers (server and client) running in Hyper-V isolation (--isolation=hyperv)
Two containers (server and client) running in process isolation (--isolation=process)
IPC is, by its nature, a non-deterministic operation. Hence, I wanted to measure and focus on latencies in my investigation rather than throughput. I created some IPC handshake applications that exchange approximately 1 KB of message from client to server. I ran them at different frequencies (>10,000 times) and measured the percentiles.
The TCP channel is probably the most commonly used binding in WCF applications. Here I have a simple WCF server and client that send some bytes over the wire. TCP is a connection-based, stream-oriented delivery service with end-to-end error detection and correction. Connection-based means that a communication session between hosts is established before exchanging data. A host is any device on a TCP/IP network identified by a logical IP address.
The sample hosts a TCP server and waits for clients to connect. Once a client is connected, it sends 1 KB of data to the server n number of times.
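The actual benchmark code lives in the repository; the following is only a minimal sketch of what such a WCF NetTcpBinding host and client look like (names and the port are illustrative), assuming .NET Framework with a reference to System.ServiceModel:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    byte[] Echo(byte[] payload);
}

public class EchoService : IEchoService
{
    public byte[] Echo(byte[] payload) => payload;
}

class Program
{
    static void Main()
    {
        var baseAddress = new Uri("net.tcp://localhost:9000/echo");
        using (var host = new ServiceHost(typeof(EchoService), baseAddress))
        {
            host.AddServiceEndpoint(typeof(IEchoService), new NetTcpBinding(), string.Empty);
            host.Open();

            // Client side: send a ~1 KB payload, as the benchmark does n times.
            var factory = new ChannelFactory<IEchoService>(new NetTcpBinding(), new EndpointAddress(baseAddress));
            var client = factory.CreateChannel();
            var reply = client.Echo(new byte[1024]);
            Console.WriteLine($"Received {reply.Length} bytes back");
        }
    }
}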
We can see that it adds latency compared to bare metal. Let's run it in process isolation mode:
That improved a lot; it's almost the same as bare metal.
Like many RPC systems, gRPC is based around the idea of defining a service, specifying the methods that can be called remotely with their parameters and return types. By default, gRPC uses protocol buffers as the Interface Definition Language (IDL) for describing both the service interface and the structure of the payload messages. It is possible to use other alternatives if desired.
I have written a similar sample project to the TCP channel one above, but this time both the server and the client use gRPC for messaging.
Let's run the same exercise with gRPC.
What I see is that process isolation is pretty darn good compared to Hyper-V isolation.
Web Sockets (over HTTPS) give an easy programming model for network communication. The WebSocket API is an advanced technology that makes it possible to open a two-way interactive communication session between the user's browser and a server. With this API, you can send messages to a server and receive event-driven responses without having to poll the server for a reply.
I didn't program against the WebSocket API directly though; I used SignalR self-hosting to do that.
In this exercise I created the same messaging with web sockets.
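Again, the benchmark source is in the repository; a minimal sketch of a SignalR 2 self-hosted hub and client (the Microsoft.AspNet.SignalR.SelfHost and Microsoft.AspNet.SignalR.Client packages; URL and hub name are illustrative) could look like this:

using System;
using Microsoft.AspNet.SignalR;
using Microsoft.AspNet.SignalR.Client;
using Microsoft.Owin.Hosting;
using Owin;

public class EchoHub : Hub
{
    public byte[] Echo(byte[] payload) => payload;
}

public class Startup
{
    public void Configuration(IAppBuilder app) => app.MapSignalR();
}

class Program
{
    static void Main()
    {
        using (WebApp.Start<Startup>("http://localhost:8080"))
        {
            var connection = new HubConnection("http://localhost:8080");
            var hub = connection.CreateHubProxy("EchoHub");
            connection.Start().Wait();

            // Send a ~1 KB payload and wait for the echo, as in the other samples.
            var reply = hub.Invoke<byte[]>("Echo", new byte[1024]).Result;
            Console.WriteLine($"Received {reply.Length} bytes back");
        }
    }
}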
Unix Domain Sockets
As I mentioned above, Linux and .NET Core were not options for this exercise. However, I couldn't resist giving it a shot and running the same messaging over a Unix domain socket on the Linux kernel.
A Unix domain socket or IPC socket is a data communications endpoint for exchanging data between processes executing on the same host operating system. Valid socket types in the UNIX domain are: SOCK_STREAM (compare to TCP), for a stream-oriented socket; SOCK_DGRAM (compare to UDP), for a datagram-oriented socket that preserves message boundaries (as on most UNIX implementations, UNIX domain datagram sockets are always reliable and don't reorder datagrams); and SOCK_SEQPACKET (compare to SCTP), for a sequenced-packet socket that is connection-oriented, preserves message boundaries, and delivers messages in the order that they were sent. The Unix domain socket facility is a standard component of POSIX operating systems.
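For reference, a minimal sketch of the client side using .NET Core's UnixDomainSocketEndPoint (the socket path is hypothetical; the server side would Bind and Listen on the same path):

using System;
using System.Net.Sockets;

class UdsClientSketch
{
    static void Main()
    {
        var endpoint = new UnixDomainSocketEndPoint("/tmp/ipc-benchmark.sock");
        using (var socket = new Socket(AddressFamily.Unix, SocketType.Stream, ProtocolType.Unspecified))
        {
            socket.Connect(endpoint);
            socket.Send(new byte[1024]);          // ~1 KB message, as in the other IPC samples
            var buffer = new byte[1024];
            var received = socket.Receive(buffer);
            Console.WriteLine($"Received {received} bytes back");
        }
    }
}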
Here’s what I get when running the similar messaging that leverages Unix domain sockets:
That’s blazing fast! Sadly I couldn’t use it for my purpose.
Putting all the numbers into a chart, I get this:
Disclaimer 1: Benchmarking is difficult – there are so many moving factors to get right. I wouldn't put any conclusive statement on it, such as claiming that a certain IPC technique is faster than another. But the source code is included, and you can run it in your environment and make your own judgement.
Disclaimer 2: Another interesting technology I wanted to try out was Windows named pipes. The source code is in the same repository, but I couldn't get it to work when sharing between containers. I will update the post once I have some progress there.
All remarks and questions are welcome. Thanks for reading!
I have a tiny Kubernetes cluster. I run some workloads there; some are useful, others are just try-outs and fun stuff. I have a few services that need to talk to each other. I do not have a lot of traffic, to be honest, but I sometimes curiously run Apache ab to simulate load and see how my services perform under stress. Until very recently I was using a messaging (basically pub-sub) pattern to create reactive service-to-service communication. This works great, but often comes with latency. I can only imagine that if I were to run this service-to-service communication in a mission-critical, high-traffic, performance-driven scenario (an online game, for instance), this model wouldn't fly well. Hence the need for a direct service-to-service communication pattern in the cluster.
What's the big deal? We can make REST calls between services, or even implement gRPC for that matter. The issue is that things behave differently at scale. When many services talk to many others, nodes fail in between, the network addresses of pods change, new pods show up and others go down – figuring out where a service sits becomes quite a challenging task.
Then Kubernetes comes to the rescue: it provides the “Service” resource, which gives us service discovery out of the box. Which is awesome. Not all issues disappear though. Services in a cluster need fault tolerance, traceability and, most importantly, observability. Circuit breakers, retry logic and the like – implementing them in every service is again a challenge. This is exactly what a service mesh addresses.
Service mesh is an approach to operating a secure, fast and reliable microservices ecosystem. It has been an important steppingstone in making it easier to adopt microservices at scale. It offers discovery, security, tracing, monitoring and failure handling. It provides these cross-functional capabilities without the need for a shared asset such as an API gateway or baking libraries into each service. A typical implementation involves lightweight reverse-proxy processes, aka sidecars, deployed alongside each service process in a separate container. Sidecars intercept the inbound and outbound traffic of each service and provide cross-functional capabilities mentioned above.
Some of us might remember Aspect-Oriented Programming (AOP), where we used to separate cross-cutting concerns from our core business concerns. A service mesh is no different: it isolates (in a separate container) these networking and fault-tolerance concerns from the core capabilities (also running in a container).
There are quite a few service mesh solutions out there, all suitable to run in Kubernetes. I have used Envoy and Istio before; they work great in Kubernetes as well as in VM-hosted clusters. However, I must admit I have developed a preference for Linkerd since I discovered it. Let's briefly look at how Linkerd works. Imagine the following two services, Service A and Service B, where Service A talks to Service B.
When Linkerd is installed, it acts as an interceptor for all the communication between services. Linkerd uses the sidecar pattern to proxy the communication by updating the kube-proxy IP tables.
Linkerd implants two sidecar containers in our pods. The init container configures the IP table so that incoming and outgoing TCP traffic flows through the Linkerd proxy container. The proxy container is the data plane that does the actual interception and provides all the other fault-tolerance goodies.
The primary reasons behind my Linkerd preference are performance and simplicity. Ivan Sim has done performance benchmarking of Linkerd and Istio:
Both the Linkerd2-meshed setup and Istio-meshed setup experienced higher latency and lower throughput, when compared with the baseline setup. The latency incurred in the Istio-meshed setup was higher than that observed in the Linkerd2-meshed setup. The Linkerd2-meshed setup was able to handle higher HTTP and GRPC ping throughput than the Istio-meshed setup.
With this IaC in place, we can run terraform apply to provision our AKS cluster in Azure.
Let's create a pipeline for the service deployment. The easiest way to do that is to create a service connection to our AKS cluster. We go to the project settings in our Azure DevOps project, pick Service connections and create a new service connection of type “Kubernetes connection”.
We will create a pipeline that installs Linkerd into the AKS cluster. Azure Pipelines now offers “pipeline as code”, which is just a YAML file that describes the steps to be performed when the pipeline is triggered. We will use the following pipeline-as-code:
We can at this point trigger the pipeline to install Linkerd into the AKS cluster.
Deployment of PODs and services
Let’s create another pipeline as code that deploys all the services and deployment resources to AKS using the following Kubernetes manifest file:
In Azure Portal we can already see our services running:
Also in Kubernetes Dashboard:
We have got our services running – but they are not really affected by Linkerd yet. We will add another step into the build pipeline to tell Linkerd to do its magic.
Next, we trigger the pipeline and put some traffic onto the service that we have just deployed. The emoji service simulates some service-to-service invocation scenarios, and now it's time for us to open the Linkerd dashboard to inspect the distributed traces and many other useful metrics.
We can also see a kind of application map – a graphical way to understand which service is calling which, what the request latencies are, and so on.
Even more fascinating, Linkerd provides drill-down into the communications in a Grafana dashboard.
I have greatly enjoyed setting this up and seeing the outcome, and I wanted to share my experience. If you are looking into service meshes and reading this post, I strongly encourage you to give Linkerd a go – it's awesome!
In recent years I have spent a fair amount of time on the design and implementation of Infrastructure as Code in a large enterprise context. Terraform seems to be the tool of choice when it comes to preserving uniformity in Infrastructure as Code targeting multiple cloud providers. It is rapidly becoming the de facto choice for creating and managing cloud infrastructure by writing declarative definitions. It's popular because the syntax of its files is quite readable and because it supports several cloud providers while making no attempt to provide an artificial abstraction across those providers. The active community quickly adds support for the latest features from most cloud providers.
However, rolling out Terraform in many enterprises has its own barriers. Although the syntax (HCL) is neat, not every developer or infrastructure operator in an organization finds it easy. There's a learning curve, and many of us lose momentum once we discover the learning effort involved. I believe that if we could make the initial ramp-up easier, more people would play with it.
That's one of my motivations for this post; the following is the other one.
Blazor meets Terraform
Lately I have been learning Blazor, the new client-side technology from Microsoft. Like many others, I find that one effective way of learning a new technology is by building a solution to a problem. I decided to build a user interface that makes creating Terraform scripts easier. I will share my journey in this post.
Resource Discovery in Terraform Providers
Terraform is powerful because of its providers. You will find Terraform providers for all major cloud platforms (Azure, AWS, Google, etc.). The providers allow us to define “resources” and “data sources” in Terraform scripts. These resources and data sources have arguments and attributes that one must know when creating Terraform files. Luckily, they are documented nicely on the Terraform site. However, it still requires us to jump back and forth between the documentation site and the Terraform file editor (e.g. VS Code).
To make this experience easier, I wrote a crawler application that downloads the Terraform providers (I am doing it for Azure, AWS and Google for now) and discovers the attributes and arguments for each and every resource and data source. I also try to extract the documentation for every attribute and argument from the Terraform documentation site with a naive parser (not 100% accurate, but it works for the majority – something I will improve soon).
This process generates JSON structure for each resource and data source, enriches them with the documentation and stores them in an Azure Blob Storage.
Building Infrastructure as code
Now that I have a structured data store with all the resources and data sources for any Terraform provider, I can leverage it to build a user interface on top. To keep things a bit organized, I started with the concept of a “project”.
I can start by creating a project (well, it could be a product too, but let's not get into that debate). A project is merely a logical boundary here.
Within a project I can create blueprints. A blueprint is the entity that retains the elements of the infrastructure we are aiming to create. For instance, a blueprint targets a cloud provider (e.g. Azure). Then I can create the elements (resources and data sources) within the blueprint (e.g. Azure Web App, Cosmos DB, etc.).
A blueprint keeps the base structure of all the infrastructure elements. It allows defining variables (plain and simple Terraform variables) so that the actual values can vary between environments (dev, test, pre-production, production, etc.).
Once I am happy with the blueprint, I can download it as a zip that contains the Terraform scripts (main.tf and variable.tf). That's it: we have our infrastructure as code in Terraform. I can execute the scripts on a local development machine or check them into source control – whatever I prefer.
One can stop here and keep using the blueprint feature to generate Infrastructure as Code; that's what it is for. However, the next features are there just to make the overall experience of running Terraform a bit easier.
Next to blueprints, we can create as many environments as we want. Again, these are just logical entities to keep the actual deployments for different environments isolated.
The deployment entity is the glue that ties a blueprint to a specific environment. For instance, I can define a blueprint for an “order management” service (or micro-service, maybe?), create an environment called “test”, and then create a “deployment” of “order management” on “test”. This is where I can assign concrete values, specific to the test environment, to the blueprint variables.
Perhaps the most important aspect the deployment entity holds is Terraform state management. Terraform must store state about your managed infrastructure and configuration. This state is used by Terraform to map real-world resources to your configuration, keep track of metadata, and improve performance for large infrastructures. The state is stored by default in a local file named “terraform.tfstate”, but it can also be stored remotely, which works better in a team environment. Defining the state properties (which vary between cloud providers) in the deployment entity makes remote state management easier, specifically in a team environment: it configures the remote state against the appropriate backend. For instance, when the blueprint's cloud provider is set to Azure, it configures an Azure Storage account as the Terraform state backend; for AWS it picks S3 automatically.
Once we have the deployment entity configured, we can run “terraform plan” directly from the user interface. The terraform plan command creates an execution plan. Unless explicitly disabled, it performs a refresh, and then determines what actions are necessary to achieve the desired state specified in the blueprint. This command is a convenient way to check whether the execution plan for a set of changes matches your expectations without making any changes to real resources or to the state. For example, terraform plan might be run before committing a change to version control, to create confidence that it will behave as expected.
The terraform apply command is used to apply the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a terraform plan execution plan. Like “plan”, the “apply” command can also be issued directly from the user interface.
Both terraform plan and apply are issued in an isolated Docker container, and the output is captured and displayed back in the user interface. However, there's a cost associated with running Docker containers in the cloud; therefore, this is disabled on the public site.
It was fun to write a tool like this, and I recommend you give it a go, especially if you are stepping into Terraform. It can also be helpful for experienced Terraform developers, specifically with the on-screen documentation, type inference and discovery features.
Some features are still a work in progress:
The ability to define policies for individual resources and data types
Lately I have been learning ASP.NET Blazor, the relatively new UI framework from Microsoft. Blazor is just awesome: the ability to write C# code on both the server and the client side is extremely productive for .NET developers. From the Blazor documentation:
I am using GitHub as my source repository for free (in a private repository). Today I wanted to create a pipeline that continuously deploys my Blazor app to the storage account. Azure Pipelines has pretty nice integration with GitHub, and it has a free tier as well. Even if our GitHub repository or pipeline is private, Azure Pipelines still provides a free tier: one free parallel job that can run up to 60 minutes each time, until we've used 1,800 minutes per month. That's pretty darn good for my use case.
I also wanted to build the project many times on my local machine (while developing) in the same way it gets built in the pipeline. Being a Docker fan myself, that's quite a no-brainer. Let's get started.
I performed a few steps before getting to the pipeline that are beyond the scope of this post.
I have created a Dockerfile that builds the app, runs the unit tests and, if all goes well, publishes the app into a folder. All of these are standard dotnet commands.
Once we have the application published into a folder, I copy the content of that folder into an Azure CLI Docker base image (where the CLI is pre-installed) and throw away the rest of the intermediate layers.
Here's our Dockerfile:
The Dockerfile expects a few arguments (basically the service principal ID, the service principal password and the Azure AD tenant ID – these are required for the Azure CLI to sign in to my Azure subscription). Here's how we can build the image now:
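For illustration, the build command could look something like the following; the image tag and the --build-arg names are hypothetical and must match whatever ARG names the Dockerfile declares:

> docker build -t blazor-app-deploy --build-arg SP_ID=<service principal id> --build-arg SP_SECRET=<service principal password> --build-arg TENANT_ID=<azure ad tenant id> .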
Azure Pipeline as code
We now have the container; time to run it every time a commit is made to the GitHub repository. Azure Pipelines has a YAML format to define pipeline-as-code, which is another neat feature of Azure Pipelines.
Let's see what the pipeline-as-code looks like:
I have committed this to the root folder of the same repository.
Creating the pipeline
We need to log in to Azure DevOps and create a project (if there isn't one already). From the Builds option we can create a new build definition.
The steps to create a build definition are very straightforward. It allows us to point directly to the GitHub repository we want to build.
Almost there. We need to supply the service principal ID, password, tenant ID and storage account name to this pipeline, because both our Dockerfile and the pipeline-as-code expect them as dependencies. However, we can't just put their values in and commit them to GitHub; they should be kept secret.
Azure Pipeline Secret variables
Azure Pipelines allows us to define secret variables for a pipeline. We need to open the build definition in “edit” mode and then go to the top-right ellipsis button, as below:
Now we can define the values of these secrets and keep them hidden (there's a lock icon there).
That's all we need: we are ready to deploy the code to Azure Storage via this pipeline. If we now go and make a change in our repository, it will trigger the pipeline and, sure enough, it will build, test, publish and deploy to Azure Storage as a static SPA.