This is a continuation of my first post. In this post, I'll touch on Security, Terraform, and Docker Hub.
Security
After setting up Aikido, I had a list of vulnerabilities to go through. I won't touch on everything but will focus on the ones I found the most interesting. Let's start with two dependency vulnerabilities.
The stdlib package is used by the Docker CE CLI. I couldn't resolve this myself, so I contacted Docker Support to report the vulnerability. They didn't provide an ETA for a fix since it's a third-party package.
The cryptography package is used by the Azure CLI and required me to create a GitHub issue.
The final vulnerability came from the self-hosted build agent running as root. I had originally done this because of a permission issue I couldn't figure out. Now that I had time to focus on it, the issue turned out to have a simple fix.
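For context, a common cause of this kind of permission problem is the agent needing access to the Docker socket. This is only a hedged sketch of one way to avoid running as root, not the actual fix from the post; the image name and user are placeholders:

```shell
# Hypothetical sketch: run the agent as a non-root user while still granting
# access to the Docker socket, instead of running the whole container as root.
# "azp" and "my-agent-image" are placeholders, not values from the post.
docker run \
  --user azp \
  --group-add "$(stat -c '%g' /var/run/docker.sock)" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-agent-image
```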
Finding a free or open-source IAST tool was harder than I expected. I came across DongTai, but it hasn't been updated since November 2023. Contrast Community Edition was the only free IAST tool I could find, and I'm still waiting to hear back from Contrast Security.
Terraform
Setting up Terraform was straightforward thanks to the well-written documentation. Within 5 minutes, I could run terraform init.
Until now, everything had been created manually, and trying to remember every setting wasn't easy. To help me out, I turned to aztfexport and the Azure template export feature. After a few hours, I had a working Terraform setup.
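The aztfexport flow can be sketched in a couple of commands; the resource group name below is a placeholder:

```shell
# Hypothetical sketch: export an existing resource group into Terraform files.
# "my-resume-rg" is a placeholder resource group name.
aztfexport resource-group my-resume-rg

# Then confirm the generated configuration matches what's actually deployed:
# the plan should show no pending changes.
terraform init
terraform plan
```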
Setting up a pipeline was the next step. The first thing I did was update the self-hosted build agent to include Terraform. Terraform can be installed during the pipeline, but that adds unnecessary time to every run.
Then I turned my attention to the state file, moving it to Azure Storage.
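Moving the state to Azure Storage comes down to an azurerm backend block; a minimal sketch, with placeholder names:

```hcl
# Minimal sketch of an azurerm backend; all names are placeholders.
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstatestorage"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}
```

If the concrete values are passed at init time via -backend-config flags, as in the pipeline below, the block can also be left empty as `backend "azurerm" {}`.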
Eventually I got a working pipeline, but there were two issues I kept running into:
- Terraform destroy would fail, and cancelling the job did nothing because the process kept running. The fix was to kill the Terraform process inside the Docker container.
- Killing the process left the state file locked, so subsequent jobs failed to access it. The fix was to break the lease on the state blob.
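Those two fixes can be sketched as shell commands; the container, storage account, container, and blob names are all placeholders:

```shell
# 1. Kill the stuck Terraform process inside the agent container.
#    "build-agent" is a placeholder container name.
docker exec build-agent pkill terraform

# 2. Break the lease on the locked state blob so later jobs can acquire it.
#    All names below are placeholders.
az storage blob lease break \
  --account-name tfstatestorage \
  --container-name tfstate \
  --blob-name terraform.tfstate
```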
Example pipeline.yml

```yaml
trigger:
- master

pool:
  name: 'Self Hosted Build Agent'

variables:
  TF_WORKING_DIRECTORY: $(tfWorkingDirectory)
  BACKEND_RG: $(backendRG)
  BACKEND_STORAGE: $(backendStorage)
  BACKEND_CONTAINER: $(backendContainer)
  BACKEND_KEY: $(backendKey)

jobs:
- job: Terraform_Deployment
  displayName: 'Terraform Deployment'
  steps:
  - script: |
      terraform init -backend-config="resource_group_name=$(BACKEND_RG)" \
        -backend-config="storage_account_name=$(BACKEND_STORAGE)" \
        -backend-config="container_name=$(BACKEND_CONTAINER)" \
        -backend-config="key=$(BACKEND_KEY)"
    displayName: 'Terraform Init'
    condition: eq(variables['initEnabled'], 'true')
    workingDirectory: $(TF_WORKING_DIRECTORY)
    env:
      ARM_CLIENT_ID: $(client_id)
      ARM_CLIENT_SECRET: $(client_secret)
      ARM_TENANT_ID: $(tenant_id)
      ARM_SUBSCRIPTION_ID: $(subscription_id)
  - script: terraform validate
    displayName: 'Terraform Validate'
    condition: eq(variables['validateEnabled'], 'true')
    workingDirectory: $(TF_WORKING_DIRECTORY)
    env:
      ARM_CLIENT_ID: $(client_id)
      ARM_CLIENT_SECRET: $(client_secret)
      ARM_TENANT_ID: $(tenant_id)
      ARM_SUBSCRIPTION_ID: $(subscription_id)
  - script: |
      terraform plan -out=tfplan \
        -var="client_id=$(client_id)" \
        -var="client_secret=$(client_secret)" \
        -var="tenant_id=$(tenant_id)" \
        -var="subscription_id=$(subscription_id)" \
        -var="location=$(location)" \
        -var="rg_name=$(rg_name)" \
        -var="cosmosdb_name=$(cosmosdb_name)" \
        -var="cosmosdb_table_name=$(cosmosdb_table_name)" \
        -var="offer_type=$(offer_type)" \
        -var="kind=$(kind)" \
        -var="capabilities=$(capabilities)" \
        -var="max_interval_in_seconds=$(max_interval_in_seconds)" \
        -var="max_staleness_prefix=$(max_staleness_prefix)" \
        -var="free_tier_enabled=$(free_tier_enabled)" \
        -var="is_virtual_network_filter_enabled=$(is_virtual_network_filter_enabled)" \
        -var="public_network_access_enabled=$(public_network_access_enabled)" \
        -var="consistency_level=$(consistency_level)" \
        -var="failover_priority=$(failover_priority)" \
        -var="type=$(type)" \
        -var="interval_in_minutes=$(interval_in_minutes)" \
        -var="retention_in_hours=$(retention_in_hours)" \
        -var="storage_redundancy=$(storage_redundancy)" \
        -var="max_throughput=$(max_throughput)" \
        -var="swa_name=$(swa_name)" \
        -var="sa_name=$(sa_name)" \
        -var="sa_account_tier=$(sa_account_tier)" \
        -var="sa_access_tier=$(sa_access_tier)" \
        -var="sa_allow_nested_items_to_be_public=$(sa_allow_nested_items_to_be_public)" \
        -var="sa_replication_type=$(sa_replication_type)" \
        -var="fa_app_sp_name=$(fa_app_sp_name)" \
        -var="fa_app_sp_os_type=$(fa_app_sp_os_type)" \
        -var="fa_app_sp_sku=$(fa_app_sp_sku)" \
        -var="fa_name=$(fa_name)" \
        -var="allowed_origins=$(allowed_origins)" \
        -var="builtin_logging_enabled=$(builtin_logging_enabled)" \
        -var="node_version=$(node_version)"
    displayName: 'Terraform Plan'
    condition: eq(variables['planEnabled'], 'true')
    workingDirectory: $(TF_WORKING_DIRECTORY)
    env:
      ARM_CLIENT_ID: $(client_id)
      ARM_CLIENT_SECRET: $(client_secret)
      ARM_SUBSCRIPTION_ID: $(subscription_id)
      ARM_TENANT_ID: $(tenant_id)
  - script: terraform apply -auto-approve "tfplan"
    displayName: 'Terraform Apply'
    condition: eq(variables['applyEnabled'], 'true')
    workingDirectory: $(TF_WORKING_DIRECTORY)
    env:
      ARM_CLIENT_ID: $(client_id)
      ARM_CLIENT_SECRET: $(client_secret)
      ARM_TENANT_ID: $(tenant_id)
      ARM_SUBSCRIPTION_ID: $(subscription_id)
  - script: |
      terraform destroy -auto-approve \
        -var="client_id=$(client_id)" \
        -var="client_secret=$(client_secret)" \
        -var="tenant_id=$(tenant_id)" \
        -var="subscription_id=$(subscription_id)" \
        -var="location=$(location)" \
        -var="rg_name=$(rg_name)" \
        -var="cosmosdb_name=$(cosmosdb_name)" \
        -var="cosmosdb_table_name=$(cosmosdb_table_name)" \
        -var="offer_type=$(offer_type)" \
        -var="kind=$(kind)" \
        -var="capabilities=$(capabilities)" \
        -var="max_interval_in_seconds=$(max_interval_in_seconds)" \
        -var="max_staleness_prefix=$(max_staleness_prefix)" \
        -var="free_tier_enabled=$(free_tier_enabled)" \
        -var="is_virtual_network_filter_enabled=$(is_virtual_network_filter_enabled)" \
        -var="public_network_access_enabled=$(public_network_access_enabled)" \
        -var="consistency_level=$(consistency_level)" \
        -var="failover_priority=$(failover_priority)" \
        -var="type=$(type)" \
        -var="interval_in_minutes=$(interval_in_minutes)" \
        -var="retention_in_hours=$(retention_in_hours)" \
        -var="storage_redundancy=$(storage_redundancy)" \
        -var="max_throughput=$(max_throughput)" \
        -var="swa_name=$(swa_name)" \
        -var="sa_name=$(sa_name)" \
        -var="sa_account_tier=$(sa_account_tier)" \
        -var="sa_access_tier=$(sa_access_tier)" \
        -var="sa_allow_nested_items_to_be_public=$(sa_allow_nested_items_to_be_public)" \
        -var="sa_replication_type=$(sa_replication_type)" \
        -var="fa_app_sp_name=$(fa_app_sp_name)" \
        -var="fa_app_sp_os_type=$(fa_app_sp_os_type)" \
        -var="fa_app_sp_sku=$(fa_app_sp_sku)" \
        -var="fa_name=$(fa_name)" \
        -var="allowed_origins=$(allowed_origins)" \
        -var="builtin_logging_enabled=$(builtin_logging_enabled)" \
        -var="node_version=$(node_version)"
    displayName: 'Terraform Destroy'
    condition: eq(variables['destroyEnabled'], 'true')
    workingDirectory: $(TF_WORKING_DIRECTORY)
    env:
      ARM_CLIENT_ID: $(client_id)
      ARM_CLIENT_SECRET: $(client_secret)
      ARM_TENANT_ID: $(tenant_id)
      ARM_SUBSCRIPTION_ID: $(subscription_id)
```
Docker Hub
This was the final step; once it was done, I'd have completed The Cloud Resume Challenge. I was going to use Azure Container Registry, but it isn't free and Docker Hub is. The setup process was straightforward.
- Created a Docker account.
- Set up Docker Hub (this required creating a private repository).
- Followed Use Azure Pipelines to build and push container images to registries.
- Updated the Docker PowerShell script to reference the correct Docker image.
Example pipeline.yml

```yaml
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

variables:
  repositoryName: 'amaniel/self-hosted-build-agent-linux'
  dockerfile: 'EXAMPLE'
  dockerTag: 'latest'
  containerRegistry: 'EXAMPLE'

jobs:
- job: Docker_Build_Push
  displayName: 'Build and Push Docker Image'
  steps:
  - task: Docker@2
    displayName: Build Docker Image
    inputs:
      command: build
      containerRegistry: ${{ variables.containerRegistry }}
      repository: ${{ variables.repositoryName }}
      dockerfile: $(Build.SourcesDirectory)/${{ variables.dockerfile }}
      buildContext: $(Build.SourcesDirectory)
      tags: ${{ variables.dockerTag }}
  - task: Docker@2
    displayName: Push Docker Image
    inputs:
      command: push
      containerRegistry: ${{ variables.containerRegistry }}
      repository: ${{ variables.repositoryName }}
      tags: ${{ variables.dockerTag }}
```
Example docker.ps1

```powershell
$AZP_URL = "EXAMPLE"
$AZP_CLIENTID = "EXAMPLE"
$AZP_CLIENTSECRET = "EXAMPLE"
$AZP_TENANTID = "EXAMPLE"
$AZP_POOL = "EXAMPLE"
$AZP_AGENT_NAME = "EXAMPLE"
$DOCKER_IMAGE = "amaniel/self-hosted-build-agent-linux"
$USER = "EXAMPLE"
$PAT = "EXAMPLE"

# Log in to Docker Hub; docker run pulls the image automatically
docker login -u $USER -p $PAT

# Run the self-hosted build agent container
docker run -v /var/run/docker.sock:/var/run/docker.sock `
  -e AZP_URL=$AZP_URL `
  -e AZP_CLIENTID=$AZP_CLIENTID `
  -e AZP_CLIENTSECRET=$AZP_CLIENTSECRET `
  -e AZP_TENANTID=$AZP_TENANTID `
  -e AZP_POOL=$AZP_POOL `
  -e AZP_AGENT_NAME=$AZP_AGENT_NAME `
  $DOCKER_IMAGE
```
If you've made it this far, thanks for reading this post. Feel free to check out my site at amanielgobezy.com.