The Veil
The first time I had to use terraform
was to create VMs on an OpenStack cloud.
It seemed so easy: all we had to do was call a command with a given set of arguments, and it would work its magic and spit out the VMs just as we wanted.
It went further, too: it maintained the state of the created VMs and showed us when any of the tracked state fields changed.
Initially I had always associated terraform purely with infrastructure - creating VMs, setting up networks, volumes… and other infra-related activities. I couldn't see past this veil that I myself had draped over the tool.
The Spark
In one of my later projects, my team came across a very clichéd problem statement. The client had a deployment platform; let's just call it 'A'. While it was used heavily to deploy many applications to their production systems, it was being deprecated in favour of another platform that had been built. We now had to port all the applications deployed through the old platform 'A' to the new platform, which we can call 'B'.
The number of applications was definitely not small, and we had to keep in mind that the various teams handling those applications would need to change their deployment strategy to target the new platform as well.
A very generic problem statement that many of us have faced before. Only this time, instead of doing it the traditional, old-fashioned way, we decided to try something new. Since the teams would be changing their deployment strategy, we had to keep track of which applications had been migrated to the new platform and which hadn't. We needed state, and so we decided to use terraform's built-in state-handling mechanism for our use case.
Thus began our journey of trying to use terraform to migrate applications from one platform to another. A new terraform provider was to be built as a first step. Without knowing for sure whether it would work, we started on it. The same thought was on everybody's mind: 'Are we using a tool to do something it wasn't meant to do?'
The initial trepidation was soon replaced with the realization that it was we who had created this barrier around the tool. The implementation of every resource boils down to four functions: Create, Read, Update and Delete. And that means that anything that supports these operations, terraform can drive.
Writing a custom provider
I will explain, from a high level and without going too much into technical detail, what this entails.
What is a provider? Providers represent the platform that is going to serve us. Every provider in terraform takes some fields. These fields are usually configuration for the provider, such as what is required to contact it. For example:
provider "aws" {
  region                  = "some-region"
  shared_credentials_file = "creds required"
  profile                 = "profile name"
}
These are the fields the AWS provider requires. Similarly, other providers take their own appropriate configuration. For us it was simple - just the IP address the new platform was on, since that was our provider.
A provider can have many resources. The configurations of a provider are shared across all the resources.
What are resources? Resources are what can be created by the provider - what a provider can provide. For example, in AWS:
resource "aws_instance" "web" {
  ami           = "ami-id"
  instance_type = "t1.micro"
}
For us, the resources were applications on 'B'. Every resource also takes some fields.
The Create function takes the fields of the resource, builds a JSON body and sends it to 'B'. The configuration needed to do this is shared through the provider. It also stores the resource fields in the state file.
The Create function makes a POST call to create the application
The Read function reads from 'B' using the configuration shared through the provider and updates the state file accordingly. This refreshes the state and checks whether anything on the platform has drifted from the desired state in our terraform file.
The Read function makes a GET call to read the application
The Update function sends the changed resources as a JSON body to 'B' and updates the state file.
The Update function makes a PUT call to update the application
The Delete function was not needed for us since we were only migrating.
But by now I think you've all got my point.
Underneath it all, the magic of terraform was nothing but CRUD operations.
The Possibilities
After this revelation, we all had so many ideas for how terraform could be used. Ideas flew around: building a provider for zsh, bash, shell setup, machine setup…
Anything that does CRUD operations is fair game. Build a custom provider to implement the operations and you are ready to go.
I will be writing another post on building custom providers, with more technical detail. Look out for it.
We also started looking into whether others had already stumbled upon this. We came across some really crazy stuff, like this one - https://github.com/ndmckinley/terraform-provider-dominos - a terraform provider to order pizzas. LOL.
Just goes to show that, for so many things out there, it is we ourselves who block our own view of the horizon.
Well, I hope you all enjoyed this read. I just wanted to share that I had assumed something about a tool and, in actuality, I was naive about it.
Maybe this sparked some new ideas for someone reading out there too.
Maybe this will remove the veil if it exists for some folks too.
Thanks for reading. Do post your ideas if you are willing to share. Any feedback on the content is welcome too.