Hey everyone, today we will build a complete CI/CD pipeline on Azure. This article will show, step by step, how to build and deploy our project using Docker, Azure Container Registry, Azure Kubernetes Service (AKS), and Azure SQL Database as our database.
To keep the work easier and well documented, I chose Terraform to provision the infrastructure. Our infrastructure will include:
- Azure Container Registry
- Azure Kubernetes Service
- Azure SQL Database with sample database
Coding our Infrastructure
We already have an idea of what we want to create, so the first step is to write the Terraform code that will create the infrastructure.
Let us take a look at our Terraform code.
```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.65"
    }
    random = {
      source  = "hashicorp/random"
      version = "3.1.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "emmilly-rg" {
  name     = "emmilly_mssql_acr_aks_rg"
  location = "South Africa North"
}

resource "azurerm_container_registry" "emmilly-acr" {
  name                = "emmillyacr"
  sku                 = "Premium"
  resource_group_name = azurerm_resource_group.emmilly-rg.name
  location            = azurerm_resource_group.emmilly-rg.location
}

resource "azurerm_kubernetes_cluster" "emmilly-k8s-cluster" {
  name                          = "emmilly-aks"
  location                      = azurerm_resource_group.emmilly-rg.location
  resource_group_name           = azurerm_resource_group.emmilly-rg.name
  dns_prefix                    = "emmilly-dns"
  public_network_access_enabled = true

  network_profile {
    network_plugin    = "kubenet"
    load_balancer_sku = "standard"
  }

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    Environment = "Production"
  }
}

resource "azurerm_role_assignment" "enablePulling" {
  principal_id                     = azurerm_kubernetes_cluster.emmilly-k8s-cluster.kubelet_identity[0].object_id
  role_definition_name             = "AcrPull"
  scope                            = azurerm_container_registry.emmilly-acr.id
  skip_service_principal_aad_check = true
}

resource "azurerm_mssql_server" "test-server" {
  name                         = "sqltest-server-emmilly"
  resource_group_name          = azurerm_resource_group.emmilly-rg.name
  location                     = azurerm_resource_group.emmilly-rg.location
  version                      = "12.0"
  administrator_login          = "emmilly"
  administrator_login_password = "emily@256"
  minimum_tls_version          = "1.2"
}

resource "azurerm_mssql_database" "test-db" {
  name           = "sqltest"
  server_id      = azurerm_mssql_server.test-server.id
  collation      = "SQL_Latin1_General_CP1_CI_AS"
  license_type   = "LicenseIncluded"
  read_scale     = false
  sku_name       = "S0"
  zone_redundant = false
  sample_name    = "AdventureWorksLT"

  tags = {
    dev = "Production"
  }
}

output "client_certificate" {
  value = azurerm_kubernetes_cluster.emmilly-k8s-cluster.kube_config.0.client_certificate
}

output "kube_config" {
  value     = azurerm_kubernetes_cluster.emmilly-k8s-cluster.kube_config_raw
  sensitive = true
}
```
Let us break down what this Terraform file will create for us.
First, we declare the provider and the provider version that we want to use.
```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.65"
    }
    random = {
      source  = "hashicorp/random"
      version = "3.1.0"
    }
  }
}

provider "azurerm" {
  features {}
}
```
The next part of our code declares the resources that we want Terraform to create, along with the names and tags we want for them.
```hcl
resource "azurerm_resource_group" "emmilly-rg" {
  name     = "emmilly_mssql_acr_aks_rg"
  location = "South Africa North"
}

resource "azurerm_container_registry" "emmilly-acr" {
  name                = "emmillyacr"
  sku                 = "Premium"
  resource_group_name = azurerm_resource_group.emmilly-rg.name
  location            = azurerm_resource_group.emmilly-rg.location
}

resource "azurerm_kubernetes_cluster" "emmilly-k8s-cluster" {
  name                          = "emmilly-aks"
  location                      = azurerm_resource_group.emmilly-rg.location
  resource_group_name           = azurerm_resource_group.emmilly-rg.name
  dns_prefix                    = "emmilly-dns"
  public_network_access_enabled = true

  network_profile {
    network_plugin    = "kubenet"
    load_balancer_sku = "standard"
  }

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }

  tags = {
    Environment = "Production"
  }
}

resource "azurerm_role_assignment" "enablePulling" {
  principal_id                     = azurerm_kubernetes_cluster.emmilly-k8s-cluster.kubelet_identity[0].object_id
  role_definition_name             = "AcrPull"
  scope                            = azurerm_container_registry.emmilly-acr.id
  skip_service_principal_aad_check = true
}

resource "azurerm_mssql_server" "test-server" {
  name                         = "sqltest-server-emmilly"
  resource_group_name          = azurerm_resource_group.emmilly-rg.name
  location                     = azurerm_resource_group.emmilly-rg.location
  version                      = "12.0"
  administrator_login          = "emmilly"
  administrator_login_password = "emily@256"
  minimum_tls_version          = "1.2"
}

resource "azurerm_mssql_database" "test-db" {
  name           = "sqltest"
  server_id      = azurerm_mssql_server.test-server.id
  collation      = "SQL_Latin1_General_CP1_CI_AS"
  license_type   = "LicenseIncluded"
  read_scale     = false
  sku_name       = "S0"
  zone_redundant = false
  sample_name    = "AdventureWorksLT"

  tags = {
    dev = "Production"
  }
}
```
The resources to be created include:
- Azure Resource Group
- Azure Container Registry
- Azure Kubernetes Cluster
- Azure Role Assignment
- Azure MSSQL Server
- Azure MSSQL Database
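As an aside, the administrator password above is hardcoded in the Terraform file, which is fine for a demo but risky in real projects. A safer pattern is to pass it in as a sensitive variable; here is a sketch (the variable name is illustrative, not part of the original code):

```hcl
# Declare a sensitive variable instead of hardcoding the password
variable "sql_admin_password" {
  type      = string
  sensitive = true
}

resource "azurerm_mssql_server" "test-server" {
  name                = "sqltest-server-emmilly"
  resource_group_name = azurerm_resource_group.emmilly-rg.name
  location            = azurerm_resource_group.emmilly-rg.location
  version             = "12.0"
  administrator_login = "emmilly"
  # Supplied at apply time, e.g. via the TF_VAR_sql_admin_password
  # environment variable, so it never lands in version control
  administrator_login_password = var.sql_admin_password
  minimum_tls_version          = "1.2"
}
```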
Then, at the bottom of our code, we declared output blocks so that Terraform will print some information about our newly created infrastructure to the terminal.
```hcl
output "client_certificate" {
  value = azurerm_kubernetes_cluster.emmilly-k8s-cluster.kube_config.0.client_certificate
}

output "kube_config" {
  value     = azurerm_kubernetes_cluster.emmilly-k8s-cluster.kube_config_raw
  sensitive = true
}
```
Terraform will output:
1 - The client certificate of our newly created Kubernetes cluster
2 - Our Kubernetes kubeconfig file
Creating the Infrastructure
Now that our Terraform file is written, we can go ahead and run the Terraform commands.
To initialize Terraform:

```shell
terraform init
```
Let us take a look at the output
Next, we validate our code to check that it is syntactically correct and internally consistent:

```shell
terraform validate
```
Now that we have done all the checks, we can go ahead and ask Terraform for a plan of what it will create for us, based on the code we have provided above:

```shell
terraform plan
```
Once we have a plan, the next step is to apply it. In other words, we ask Terraform to use our already configured credentials to access our Azure account and create the resources.
Please note I will use the `--auto-approve` option of the `terraform apply` command. Normally you do not need to add it; without it, Terraform shows the full plan and asks for your confirmation before applying. In my case, I already know I want it to proceed, so I added that bit.

```shell
terraform apply --auto-approve
```
Now that we can see the `terraform apply` is complete, let us head over to our Azure account.
Awesome! Our infrastructure has been created.
For our next step, let us add a Dockerfile to our Node.js application, then:
1 - Build a Docker image from it:

```shell
docker build . -t shecloud
```

2 - Then tag it with the registry's login server name:

```shell
docker tag shecloud <loginservername>/shecloud
```
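The article does not show the Dockerfile itself, so here is a minimal sketch for a Node.js app; the base image, entry file, and port are assumptions about the project, not the author's actual file:

```dockerfile
# Hypothetical Dockerfile for the Node.js app (adjust names to your project)
FROM node:16-alpine
WORKDIR /app
# Copy manifests first so dependency installs are cached between builds
COPY package*.json ./
RUN npm install --production
COPY . .
# The Kubernetes manifest later in the article exposes containerPort 3000
EXPOSE 3000
CMD ["node", "index.js"]
```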
Now let us go ahead and check our local Docker repository:

```shell
docker images
```
Now that we have confirmed that our Docker images are present, our next step is to log in to our container registry:

```shell
docker login <login server name>
```

In the portal, enable the Admin user toggle under the registry's Access keys to see your server username and password.
Good job so far! Let us continue.
We need the registry's login server name, so let us look it up in the portal.
Next, we will upload or push the Docker image that we built earlier on our local machine:

```shell
docker push <loginservername>/shecloud
```
We have just pushed our Docker image successfully; next, let us go check it out in our ACR (Azure Container Registry).
If you have followed along so far, great job!
Deploying to Kubernetes
Now that we have all that we need for our deployment, let us log in to Azure locally:

```shell
az login

# Next, set our subscription
az account set --subscription xxxxxx-xxxx-xxxx-xxxxxx

# Finally, get the credentials needed to talk to the cluster
az aks get-credentials --resource-group <resource group name> --name <aks name>
```
Now we have access to our Kubernetes cluster locally. Next, let us check the nodes in our cluster:

```shell
kubectl get nodes
```
Let us write our Kubernetes manifest that will deploy our application to our cluster.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-shecloud
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-shecloud
  template:
    metadata:
      labels:
        app: azure-shecloud
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: azure-shecloud
          image: emmillyacr.azurecr.io/shecloud:latest
          env:
            - name: ALLOW_EMPTY_PASSWORD
              value: "yes"
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
            - containerPort: 3000
              name: azure-shecloud
---
apiVersion: v1
kind: Service
metadata:
  name: azure-shecloud
spec:
  type: LoadBalancer
  ports:
    - port: 3000
  selector:
    app: azure-shecloud
```
We now have everything we need to deploy our application to our cluster, so let us apply the manifest:

```shell
kubectl apply -f node_sql.yaml
```
Success! Next, we need to check the external IP of our app:

```shell
kubectl get svc
```
We are almost at the end, one last push!
Next, we will open the Azure portal and allow IP access on our SQL server's firewall.
Make sure to tick the box that allows Azure services to access the server.
Click Save to save the changes.
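If you prefer to keep this step in code rather than the portal, the same setting can be expressed in Terraform: the special 0.0.0.0 firewall rule is Azure's convention for allowing Azure services to reach the server. A sketch, matching the server resource defined earlier (the rule name is illustrative):

```hcl
# The 0.0.0.0 - 0.0.0.0 range corresponds to the portal checkbox
# "Allow Azure services and resources to access this server"
resource "azurerm_mssql_firewall_rule" "allow_azure_services" {
  name             = "AllowAzureServices"
  server_id        = azurerm_mssql_server.test-server.id
  start_ip_address = "0.0.0.0"
  end_ip_address   = "0.0.0.0"
}
```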
Lastly, when we open the external IP of our load balancer in a browser, 20.87.94.72:3000, our application is live.
Yay! We succeeded!