Summary
This is part of an article series introducing the Azure IoT Edge integration test template - azure-iot-edge-integration-test-template. In this Part.1, I will explain the entire pipeline, including Infrastructure as Code and the integration test. Part.2 will provide information about each IoT Edge module, and Part.3 will cover the IoT Edge manifest generator.
TOC
- Overview
- Architecture
- Test steps
- Infrastructure deployment
- Setup and installation
- Code deployment and test execution
Overview
When you have multiple Azure IoT Edge modules on an edge device and want to update the code of one of those modules, you can ensure code quality with a linter and unit tests, but it is difficult to validate the communication among the modules. That is why it is important to execute integration tests every time you update the software.
With this template, you can execute integration tests of Azure IoT Edge modules in a test environment on an Azure Virtual Machine. The test procedure is fully automated by Azure Pipelines. By taking advantage of this template, you can deploy and run integration tests before deploying your code to the edge device.
Architecture
This template includes six IoT Edge sample modules - FileGenerator, FileUpdater, FileUploader, IothubConnector, WeatherObserver, and LocalBlobStorage. Details of the IoT Edge modules are explained in Azure IoT Edge Integration Test template - Part.2.
Test steps
You need to execute three steps: 1) Infrastructure deployment, 2) Setup and installation, and 3) Code deployment and test execution. Everything you need to know to run this template is described in Getting-started, but I will explain the details and important points of those pipelines.
Infrastructure deployment
The Azure resources needed for this template are defined in main.bicep, and you can deploy them by running the IaC (Infrastructure as Code) pipeline defined in iac.yml on Azure Pipelines.
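As a rough sketch of what such a pipeline boils down to - the resource group naming and the Bicep parameter below are assumptions for illustration, not necessarily what iac.yml actually passes - the deployment is a single az deployment group create step:
steps:
  - task: AzureCLI@2
    displayName: Deploy main.bicep
    inputs:
      azureSubscription: $(AZURE_SVC_NAME)
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        # deploys every resource defined in main.bicep into the resource group
        az deployment group create \
          --resource-group rg-$(BASE_NAME) \
          --template-file main.bicep \
          --parameters base_name=$(BASE_NAME)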
Network Security Group - inbound Port 22
main.bicep defines a Network Security Group inbound rule that opens port 22. This is necessary for you to access the VM through SSH when installing Azure IoT Edge in the next step, and for the Azure Pipelines agent to set up the VM's directories and grant directory access permissions in test-prep.yml.
properties: {
  securityRules: [
    {
      name: 'SSH'
      properties: {
        protocol: 'Tcp'
        sourcePortRange: '*'
        destinationPortRange: '22'
        sourceAddressPrefix: '*'
        destinationAddressPrefix: '*'
        access: 'Allow'
        priority: 100
        direction: 'Inbound'
      }
    }
  ]
}
VM domain name label
Setting up the domain name label for the VM is important because the IP address is dynamically allocated and the Azure Pipelines agent accesses the VM through the VM service connection described in Create SSH Service Connection.
The host name should be edge-{BASE_NAME}.{LOCATION}.cloudapp.azure.com, which is defined in main.bicep.
var dns_label = 'edge-${base_name}'

resource PublicIp 'Microsoft.Network/publicIPAddresses@2021-05-01' = {
  name: public_ip_name
  location: location
  sku: {
    name: 'Basic'
  }
  properties: {
    publicIPAllocationMethod: 'Dynamic'
    publicIPAddressVersion: 'IPv4'
    dnsSettings: {
      domainNameLabel: dns_label
    }
    idleTimeoutInMinutes: 4
  }
}
IoT Hub consumer group
It is better to create a dedicated consumer group on the IoT Hub. If you consume messages from the IoT Hub through another tool, such as Azure IoT Hub Explorer, on the same consumer group, it can cause errors in the integration test. main.bicep deploys a consumer group dedicated to the Azure Pipelines agent that executes the integration tests.
param iothub_cg_name string

resource IoTHubConsumerGroup 'Microsoft.Devices/IotHubs/eventHubEndpoints/ConsumerGroups@2021-07-02' = {
  name: '${IoTHub.name}/events/${iothub_cg_name}'
  properties: {
    name: iothub_cg_name
  }
}
Setup and installation
VM Service Connection
This template uses an SSH service connection. It is also possible to use plain bash commands like ssh testuser@edge-{BASE_NAME}.{LOCATION}.cloudapp.azure.com. By combining retryCountOnTaskFailure with the Azure Pipelines task, you can handle unstable SSH connection errors.
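A minimal sketch of such a task, assuming the SSH service connection is stored in $(VM_SVC_NAME) as in the later templates; the inline command is only a placeholder:
steps:
  - task: SSH@0
    displayName: Run commands on the edge VM
    retryCountOnTaskFailure: 3
    inputs:
      sshEndpoint: $(VM_SVC_NAME)
      runOptions: inline
      inline: |
        # placeholder; the real commands live in test-prep.yml
        echo "connected to the edge VM"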
Blob Contributor Role
You need to manually assign the Storage Blob Data Contributor role (an Azure built-in role of Azure RBAC) to the service principal of your Azure service connection, so that the Azure Pipelines agent can generate SAS tokens for Blob Storage. To create the role assignment, you need the Owner or User Access Administrator role on the Azure subscription.
az role assignment create `
--role "Storage Blob Data Contributor" `
--assignee {Object ID of Azure Service Connection} `
--scope "/subscriptions/{Azure Subscription ID}/resourceGroups/rg-{BASE_NAME}/providers/Microsoft.Storage/storageAccounts/st{BASE_NAME}"
Code deployment and test execution
edge-module.yml
Call this template once per IoT Edge module. It builds the module's container image and pushes it to Azure Container Registry.
It uses plain docker commands instead of the Azure Pipelines Docker task, because that way you do not have to create a Docker Registry service connection manually. The whole process - extracting the Azure Container Registry key, building, and pushing - is automated by Azure Pipelines.
acrkey=$(az acr credential show --name $(ACR_NAME) --query passwords[0].value -o tsv)
cd ${{ parameters.dockerfileDirectory }}
docker login -u $(ACR_NAME) -p $acrkey $(ACR_NAME).azurecr.io
docker build --rm -f Dockerfile -t $(ACR_NAME).azurecr.io/${{ parameters.repositoryName }}:$(Build.BuildNumber) .
docker push $(ACR_NAME).azurecr.io/${{ parameters.repositoryName }}:$(Build.BuildNumber)
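A sketch of how the template could be called for one module; the template path and the Dockerfile directory are assumptions based on the parameter names above, so check the repository for the actual values:
- template: ./templates/edge-module.yml
  parameters:
    # path to the directory containing the module's Dockerfile (illustrative)
    dockerfileDirectory: ./modules/IothubConnector
    repositoryName: iothub-connector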
test-prep.yml
- Call this template with the IoT Edge modules as parameters. The parameters are used for iterative tasks that check whether each module image exists in Azure Container Registry and whether each module is running on the IoT Edge runtime (a sketch of the iteration follows the parameter block below).
- template: ./templates/test-prep.yml
  parameters:
    azureSvcName: $(AZURE_SVC_NAME)
    vmSshSvcName: $(VM_SVC_NAME)
    EdgeImages:
      module1:
        name: IothubConnector
        repository: iothub-connector
        tag: $(Build.BuildNumber)
      module2:
        name: WeatherObserver
        repository: weather-observer
        tag: $(Build.BuildNumber)
      module3:
        name: FileGenerator
        repository: file-generator
        tag: $(Build.BuildNumber)
      module4:
        name: FileUploader
        repository: file-uploader
        tag: $(Build.BuildNumber)
      module5:
        name: FileUpdater
        repository: file-updater
        tag: $(Build.BuildNumber)
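Inside test-prep.yml, the iteration over EdgeImages could look roughly like this - a sketch assuming the parameter names shown above, not the template's exact implementation:
parameters:
  - name: azureSvcName
    type: string
  - name: EdgeImages
    type: object

steps:
  - ${{ each module in parameters.EdgeImages }}:
      - task: AzureCLI@2
        displayName: Check ${{ module.value.name }} image in ACR
        inputs:
          azureSubscription: ${{ parameters.azureSvcName }}
          scriptType: bash
          scriptLocation: inlineScript
          inlineScript: |
            # fails the step if the image with this tag was not pushed
            az acr repository show --name $(ACR_NAME) \
              --image ${{ module.value.repository }}:${{ module.value.tag }}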
- SSH task
  - Remove the /edge directory to clean up the leftovers of past test executions.
  - Grant read, write, and execute permissions on the host machine directories: Azure IoT Edge Hub runs with UID 1000, and the Azure IoT Edge local blob storage runs with user ID 11000 and user group ID 11000.
    - Host system permissions
    - Granting directory access to container user on Linux
  - Install the tree command so that the host machine directory can be shown in the log.
  - Restart the Azure IoT Edge runtime, because you deleted directories that the current modules bind-mount. To refresh the connection between the modules and the directories, you need to restart the runtime; otherwise it causes errors.
if [ -d "/edge" ]
then
sudo rm -r /edge
fi
sudo mkdir -p $(FILE_UPLOADER_DIR)
sudo chown -R 1000 $(FILE_UPLOADER_DIR)
sudo chmod -R 700 $(FILE_UPLOADER_DIR)
sudo mkdir -p $(FILE_UPDATER_DIR)
sudo chown -R 1000 $(FILE_UPDATER_DIR)
sudo chmod -R 700 $(FILE_UPDATER_DIR)
sudo mkdir -p $(LOCAL_BLOB_STORAGE_DIR)
sudo chown -R 11000:11000 $(LOCAL_BLOB_STORAGE_DIR)
sudo chmod -R 700 $(LOCAL_BLOB_STORAGE_DIR)
sudo apt-get install -y tree
tree /edge
sudo iotedge system restart
test-execution.yml
- Use the asynchronous bash operator & so that the Azure Pipelines agent sends a direct method request to IoT Hub and, at the same time, listens to the messages sent from the IoT Edge modules to IoT Hub. --timeout is currently set to 30 seconds. Sometimes the response from IoT Edge is slow, the modules do not respond in time, and this causes errors. 30 seconds is probably a good time to wait: if it takes longer than 30 seconds, something is going wrong in the Edge modules.
az iot hub invoke-module-method --hub-name $(IOTHUB_NAME) --device-id $(IOTHUB_DEVICE_ID) --module-id IothubConnector --method-name request_weather_report --method-payload '{"city": "Tokyo"}' &
testResult=$(az iot hub monitor-events --hub-name $(IOTHUB_NAME) --device-id $(IOTHUB_DEVICE_ID) --module-id IothubConnector --cg $(IOTHUB_CONSUMER_GROUP) --timeout 30 -y)
test-cleanup.yml
- Remove all directories but keep the blob container weather.
az storage blob directory delete --account-name $(STORAGE_ACCOUNT_NAME) --container-name $(BLOB_CONTAINER_NAME) --directory-path $(TEST_ORGANIZATION_NAME) --recursive