A Teacher's Learning Log for the Cloud Resume Challenge

Key facts

  • End goal: Complete the Cloud Resume Challenge designed by Forrest Brazeal

  • Cloud provider used: Microsoft Azure only

  • Repo used: GitHub

  • Deployment method: GitHub Actions + ARM templates

  • Cost limitations: Keeping cost to a minimum, meaning a few pence a month

  • Started: 11 May 2022

  • Completed: 14 June 2022

  • Time period involved: about a month

Introduction

This learning log is dedicated to completing the Cloud Resume Challenge devised by Forrest Brazeal, using only Azure and GitHub. He has a book that gives extra guidance. I didn't buy it; I wanted the extra challenge of figuring it out for myself with just the rudimentary steps outlined on his website. I completed the challenge a couple of months ago.

Since studying for the Azure Fundamentals exam, I've also realised I could have sped things up significantly by having the Azure portal generate an ARM template for the entire resource group, covering all the Cloud Resume Challenge resources. Back in May, being a newbie to Azure, I thought the portal could only do the 'Export template' function for each individual resource. That said, I would not have learnt half as much if I had let Azure generate the whole ARM template, as I had to understand how to construct ARM templates for myself, including…

  • what resources were dependent on what other resources, and therefore the order of deployment required
  • whether I should use nested or linked templates
  • how to accomplish my goal of having one parameter file for the deployment of one resource group with all the resources deployed inside it

If you spend the time to read my log, you will feel my struggles as I run headlong into various brick-wall-like challenges, and experience my process of self-discovery as I manage to heave myself over them. I can't promise that the blood and sweat involved made my journey a beautiful thing to watch, but it was satisfying to get there in the end.

Some things I learnt

How to...

  • clone, push, pull repos from GitHub
  • write, test and deploy an API in Azure Functions
  • deploy an Azure Function through Visual Studio Code or GitHub Actions
  • write, update, test GitHub Action workflows
  • branch in a GitHub Actions workflow (parallel processing)
  • set up dependent steps in a GitHub Actions workflow
  • troubleshoot an ARM template that won't validate
  • design a linked ARM template with multiple dependent resources
  • get multiple ARM templates to use one master parameter file
  • run Azure CLI/PowerShell commands within a GitHub Actions workflow
  • manipulate strings in a GitHub Actions workflow

Summary of what was deployed

  1. Resource group
  2. Azure Cosmos DB Account with SQL API + container
  3. Storage Account for static website files
  4. Storage Account for deployment of Azure Function artifact
  5. Azure Function API
  6. Azure DNS Zone
  7. Azure CDN Profile + Endpoint

Before starting this log...

11.5.22 9:28pm

I've already managed to use the portal to manually create the resources, so that my resume comes up if I go to www.rayrossi.net. I made a simple HTML resume, a CSS file, and a JavaScript file (with a script connecting to a free API that runs my page counter until my own API is working). I then learnt to:

  • Create the resource group
  • Create a storage account (general purpose v2)
  • Under the storage account, enable ‘Static website’, change the ‘Index document name’ + upload my HTML, CSS and JavaScript files to the ‘$web’ container
  • Change my DNS servers to Azure-managed ones
  • Create a DNS zone for rayrossi.net, including CNAME records for www and MX records for Zoho mail
  • Create a CDN profile and endpoint ‘rossi.azureedge.net’, add custom domains rayrossi.net and www.rayrossi.net, and enable ‘custom domain HTTPS’ for www.rayrossi.net

Task 1: Enabling HTTPS on root domain

The problem

My resume will open if I go to www.rayrossi.net. However, if I go to rayrossi.net (also called the root, naked or apex domain), most browsers warn you about it not being secure. This is because rayrossi.net does not have custom HTTPS enabled (see image below).

[Image: browser warning that rayrossi.net is not secure because custom HTTPS is not enabled]

Microsoft Azure no longer supports automatically providing a certificate to enable HTTPS on a root/naked/apex domain.

The plan

Figure out how to provide my own certificate to allow HTTPS to be enabled and secure the apex domain, rayrossi.net.

Actions

Trying to provide my own certificate...

Step 1

Generating my own self-signed certificate within Key Vault > Certificates. It generated fine, but when I tried to enable HTTPS within the CDN endpoint for my root domain using the self-signed certificate, it said it could not be used.

Step 2

I tried generating a free SSL certificate using openssl (in Linux). This website was extremely helpful to get the right commands:

https://systemzone.net/how-to-create-free-ssl-tls-certificate-with-openssl/

Since this only generated a .cer file, I then used this website to help me use the private key file and the .cer file to generate the .pfx file required by Azure.

https://www.ssl.com/how-to/create-a-pfx-p12-certificate-file-using-openssl/

I tried importing the .pfx file into Azure Key Vault > Certificates. This worked fine, but when trying to enable HTTPS for the CDN endpoint for my root domain, it still said the certificate was self-signed so could not be used.

Step 3

I understood that the certificate needed to be generated by a Microsoft approved CA (certificate authority), a list of which can be found here:

https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT

I tried to find one on the list that might sign a certificate for free. A 2016 article said that WoSign provided SSL certificates for free, and WoSign is in the list of Microsoft-approved CAs. However, going directly to the WoSign website (and using website translation), it initially appeared to have free certificates, but this wasn't true when you actually tried to 'get' one (you were taken to a price page/shopping cart asking at least 488 yuan for a 1-year certificate).

After deciding to work on a redirect instead...

Step 4

Since I couldn't get rayrossi.net to work directly, I searched Google for how to get Azure CDN to redirect from rayrossi.net to www.rayrossi.net (which is enabled for HTTPS). I found this article:

https://clemens.ms/hosting-a-static-site-on-azure-using-cdn-and-https/

Under the section 'URL rewrite rules', I found some advice on redirecting from the root domain to www. It suggested a rule with the condition 'If the domain starts with' rayrossi.net. However, it didn't make any difference. Then, I found this…

https://stackoverflow.com/questions/62612833/azure-cdn-how-to-redirect-from-root-domain-to-www-some-rules-from-rules-engin

It suggested doing what I was trying to do, but in the end said it wouldn’t work, and recommended buying a certificate. Back to square one… or was I?

I thought I'd try another rule with a different condition, for example, not starting with 'www'. There was no condition for a URL not starting in a certain way, but I found there was a 'not contains' condition. I set the full condition as: if the URL 'not contains' 'www.rayrossi.net', then permanently redirect to 'www.rayrossi.net'. To my surprise, it finally worked!

Task 2: Create ‘Visitor Counter’ API

The plan

To use Azure Functions + Cosmos DB to create a working visitor counter API that will work for any website, not just for my resume HTML page.

Rationale

I want to be able to use the visitor counter API for any website I might create in future, and to keep separate visitor counts for any page or location within a website.

Therefore, my Cosmos DB container would need a separate record/entity/document for each unique URL. This 'url' would need to be passed into my API. I would then check whether a visitor counter record already existed for the url: if so, increment it by 1 and return the new visitor count; if not, create a new record, initialise the visitor counter at 1, and return a visitor count of 1.
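
To make that logic concrete, here is a minimal sketch of the get-or-create-and-increment idea using the azure-cosmos Python SDK. It is not my exact function code: the connection string is a placeholder, and the assumption that each document stores 'url' and 'count' fields is just for illustration (the database and container names match the ones I create later).

# Minimal sketch of the visitor-counter logic (illustrative, not the exact challenge code).
# Assumes each document looks like {"id": ..., "url": ..., "count": ...}.
import uuid
from azure.cosmos import CosmosClient

client = CosmosClient.from_connection_string("<cosmos-connection-string>")  # placeholder
container = client.get_database_client("ResumeDB").get_container_client("VisitorCount")

def record_visit(url: str) -> int:
    """Return the updated visitor count for the given URL."""
    matches = list(container.query_items(
        query="SELECT * FROM c WHERE c.url = @url",
        parameters=[{"name": "@url", "value": url}],
        enable_cross_partition_query=True,
    ))
    if matches:
        doc = matches[0]
        doc["count"] += 1
        container.replace_item(item=doc["id"], body=doc)  # update the existing record
        return doc["count"]
    # First visit for this URL: create the record with an initial count of 1
    container.create_item({"id": str(uuid.uuid4()), "url": url, "count": 1})
    return 1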

Actions

Step 1

I tried to create a Cosmos DB account in the Azure portal (using 'serverless' and the least redundancy and backups to keep costs down), but it kept failing with an error saying the 'UK South' location (my default location for all my resources, as it's geographically closest to me) was too busy. After multiple attempts on different days, I tried the 'UK West' location, and it was created successfully.

Step 2

After doing some research on getting started with Azure Functions, it seemed most people were using the SQL API type of Cosmos DB account, so I had mistakenly made the Tables API type. I created the SQL API version of the Cosmos DB account. I then created a database called 'ResumeDB' and, within that, a container called 'VisitorCount'.

Learning to create an Azure Function...

Step 3

I now had to learn to create an Azure Function. I followed a tutorial (https://evan-wong.medium.com/create-api-using-azure-function-with-python-and-azure-cosmos-db-afda09338d82) to create an initial function locally on my Windows computer with Visual Studio Code. It also gave me some sample code on how to create a record and list records in a Cosmos DB.

Step 4

I changed the local.settings.json file so that it contained the connection string I found in the Azure portal under the SQL API Cosmos DB account > Keys. I also pasted it under "ConnectionStringSetting" in the function.json config file. The problem was the function just kept generating an error, and I could not, for the life of me, figure out how to get it to connect to my Cosmos DB without the error.

Step 5

When I Googled the error, there were very few solutions I could find to the problem. The other problem was that I didn't really understand why there was a local.settings.json and then a separate function.json, or why I needed to have the Cosmos database connection string duplicated in multiple places. I worked through all the possible solutions I could find online (it was extremely frustrating, as there were few, and none of them really made sense to me – my understanding of Azure Functions was still too limited at the time) before eventually trying the final one.

Comparing the local.settings.json file with the function.json, I came to see there was a difference in the name of the key storing my duplicated connection strings. In local.settings.json, under "ConnectionStrings", there was the key "AzureCosmosDBConnectionString". In function.json, there was the key "connectionStringSetting". I thought to myself, why is one called a setting and the other the actual connection string? What if "AzureCosmosDBConnectionString" needs the actual connection string, while the setting key needs the value "AzureCosmosDBConnectionString" so it knows the name of the variable storing the actual connection string? It seemed to make sense that this would avoid having to repeat the actual connection string in multiple places.

Finally, the connection to the Cosmos DB worked and I was able to create a record in the database (reviewing the Cosmos database container's data entries/documents/records directly through 'Data Explorer' in the Azure portal).

Step 6

Starting with more sample Azure Function Python code to list records in a Cosmos DB, I was able to modify it to display the visitor records in my database. I then used this code to help me use the ‘url’ query string passed into the function to query the database. If a record/document didn’t exist, I had working code to add a record with an initial visitor count of 1.

Step 7

My next problem was that I needed to be able to update an existing Cosmos database record if the url already had a visitor count, incrementing the visitor count by 1. However, I could not modify my existing code to update a record. After a Google search, I found a Stack Overflow post that said the replace_item function can be used to update a Cosmos database record. Eventually, I found sample code provided by Microsoft on GitHub on how to use the replace_item function. I used it and it worked.

Step 8

I next added a route to my function.json file ("route":"NewVisitor/{*url}") and changed my __init__.py to pass in the 'url' as a route parameter rather than a query string.

Step 9

My next problem was that the 'url' route string passed into the function was used immediately to query the database with the query "SELECT * from c WHERE c.url = {url}". This meant that when the function was run initially (without an actual URL), the url value was the literal {*url}, creating a visitor record for the url {*url}. To solve this, I wrote a conditional statement in my __init__.py file that skipped the database lookup if the url passed in was equal to {*url}.
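
A rough sketch of what that guard looks like in __init__.py (variable and helper names here are illustrative, not my exact code):

import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    url = req.route_params.get("url", "")
    # When the function is hit without a real URL, the route parameter comes
    # through as the literal "{*url}", so skip the database lookup in that case.
    if not url or url == "{*url}":
        return func.HttpResponse("No URL supplied", status_code=400)
    count = record_visit(url)  # query/create/increment as sketched earlier
    return func.HttpResponse(str(count))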

Step 10

My next problem was the output of my function. How could I create an API response that I could actually consume from my resume.html page using JavaScript? I used Postman to analyse both the body and headers of my API's response and compared them to the response of the visitor counter API at https://api.countapi.xyz. I formatted my body in exactly the same format. I also had to figure out how to set the header 'Content-Type' to text/javascript (also imitating countapi.xyz's visitor counter).
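
As a sketch, returning the count with that Content-Type from a Python Azure Function could look like the following (the exact body shape countapi.xyz uses is my assumption here):

import json
import azure.functions as func

def make_response(count: int) -> func.HttpResponse:
    # Mimic a countapi-style body and set the Content-Type header
    body = json.dumps({"value": count})
    return func.HttpResponse(body, mimetype="text/javascript", status_code=200)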

Step 11

With the API working, I now set about breaking my function into units of code (using the Single Responsibility Principle). I delegated responsibilities from my __init__.py file into /shared_code/extra_functions.py. Certain bindings are wired to the code in the main() function through function.json, which means a binding variable is sometimes passed as a parameter to a function.

Step 12

I tried to add some integration tests to the function, structured as recommended by an article on Python testing. I made a test folder and, within that, created an 'integration' folder. Within this folder, I created a 'test_integration.py' file to hold my integration tests, and a 'fixtures' folder holding a 'test_basic.json' file of dummy data. The point of the dummy data is that it could be used instead of the Cosmos DB data if there was a problem accessing the online database. I set up some integration tests to run against the dummy data to verify that the fields/column headings were correct, the number of records of dummy data, and that the dummy data is passed in as a list of dictionaries.
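
A small sketch of what those fixture-backed tests might look like (the field names in the dummy data are my assumption):

# tests/integration/test_integration.py (sketch)
import json
import pathlib

FIXTURE = pathlib.Path(__file__).parent / "fixtures" / "test_basic.json"

def load_dummy_records():
    with open(FIXTURE) as f:
        return json.load(f)

def test_dummy_data_is_list_of_dicts():
    records = load_dummy_records()
    assert isinstance(records, list)
    assert all(isinstance(r, dict) for r in records)

def test_dummy_data_has_expected_fields():
    # Field names assumed for illustration
    assert all({"id", "url", "count"} <= set(r.keys()) for r in load_dummy_records())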

Step 13

Problem: I wanted to set up a failsafe so that if the connection to the Cosmos DB failed, a message would be logged such as 'Cosmos DB cannot be accessed, so dummy data will be used as a recordset instead'. However, there was a problem catching the exception/error when the Cosmos DB connection fails.

I read the Microsoft article (https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-error-pages?tabs=python) regarding this multiple times, and I had no joy in being able to catch exception errors regarding the bindings. There is next to no guidance on catching binding errors. Without being able to catch the exception error, it was going to be difficult to make the switch to the dummy data.

I tried catching the exception by wrapping the entire code of the main() function in a try/exception, however, this didn’t catch the binding exceptions. Any error connecting to the Cosmos DB simply caused the function to fail.

Therefore, I disabled the integration tests running at the top of my main function, because it seemed pointless for the tests to run on every invocation of the function when the dummy data wouldn't be used as a failsafe for a failed Cosmos DB connection.

Step 14

After a re-think, I decided to create another function in the same project called NewVisitor. This implements the same code, but without the Cosmos DB bindings. To help me work towards testing Cosmos DB access from API functions that do not use the database bindings in function.json, I also created Delete and ListAll functions. To delete a record, you need to pass the record's id. To find out which id to pass, ListAll lists all the records; you can set the order of the records by passing the orderby parameter with the field you want to order by, and reverse the order by passing the rev parameter set to true.

Step 15

I set up tests using Pytest to check the difference in speed (see screenshots). Without the automatic DB bindings provided by function.json, I found the total time to be 0.07 seconds slower for 4 database calls (2 for a brand new website visitor and 2 for a re-visit): 10.62 sec with bindings vs 10.69 sec without.

Step 16

I set up integration tests to check that the correct API outputs occur with 2 valid URL addresses and 1 invalid URL.

Problems I ran into setting up the Pytests:

a. I wanted to pass in parameters into the test functions but couldn’t see a way to do it. This Pytest documentation helped me re-run tests using different test inputs and expected outputs: https://docs.pytest.org/en/latest/example/parametrize.html
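
For example, parametrised tests along these lines let the same test run over several inputs (the URLs and the helper shown are placeholders):

import pytest

@pytest.mark.parametrize("url, expected_status", [
    ("www.example.com", 200),
    ("www.example.com/blog", 200),
    ("not a valid url", 400),
])
def test_api_response_status(url, expected_status):
    response = call_visitor_api(url)  # illustrative helper that calls the API
    assert response.status_code == expected_status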

b. I created a RunTest function to show the results of the Pytests on the screen, but the Pytest results only appeared in the terminal's screen logging. I found a snippet of code to help me back up/redirect stdout so I could capture the values being output to the screen. I was then able to return this as an HttpResponse.
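
The idea looks roughly like this (an approach sketch, not the exact snippet I found):

import contextlib
import io
import pytest
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):       # temporarily redirect stdout
        pytest.main(["tests/integration", "-q"])   # run the integration tests
    return func.HttpResponse(buffer.getvalue(), mimetype="text/plain")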

c. I had a problem getting the API HTTP response (test output) to match the expected output. By analysing the Pytest error, I found that the ‘response.content’ was rendered as bytes (i.e. b’’) while the expected output was a normal string. To convert the expected output string to bytes, I found that I either needed to put a b before the expected output string (e.g. b’’) or I needed to add .encode(‘UTF-8’) after the end of the string when I needed to pass parameters into the string.
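
A tiny illustration of that bytes-vs-string mismatch:

expected = "visits: 5"
response_content = b"visits: 5"  # what response.content returns (bytes)

assert response_content == b"visits: 5"              # option 1: bytes literal
assert response_content == expected.encode("UTF-8")  # option 2: encode the string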

Task 3: Automate PDF creation from the HTML page

The plan

I want my resume (HTML) page to automatically generate the PDF file based on the HTML of the current page, rather than having to separately convert and save a PDF every time I change my HTML resume.

Actions

To help, I found this page: https://www.freakyjolly.com/multipage-canvas-pdf-using-jspdf/. This showed how to automatically generate the PDF using client-side JavaScript.

I wanted to use an HTML <a> tag to run the JavaScript function. I found out how to do this using this Stack Overflow article:

https://stackoverflow.com/questions/16337937/how-to-call-javascript-from-a-href.

Task 4: Automate deployment of front-end

The problem

I want to automate the front-end so that if I push changes to any files in the folder “front-end” at my Github repo at https://github.com/daraymonsta/resume-challenge-public, the changes will automatically be pushed to Azure Blob Storage where my static website is stored.

Actions

Stage 1: Getting my Github repo working

I logged into GitHub. I added a new repo called "resume-challenge-public" (I chose public access and opted not to automatically add a README.md file).

I made a local folder in Windows where I wanted to store my GitHub repos and created the folder "resume-challenge-public" inside it. I opened a command prompt and changed to that folder. I then ran these commands to test I could update my newly created repo.

git init
echo "# resume-challenge-public" >> README.md
git add README.md
git commit -m "add README.md"
git branch -M main
git remote add origin https://github.com/daraymonsta/resume-challenge-public.git
git push -u origin main

After running the last command, I entered the personal access token I had already created on GitHub.

The push command finished running (I could see the uploads), and I checked the GitHub repo to double-check the README.md was now there, which it was.

I added my front-end resume folder (which already included my static resume HTML, CSS and JavaScript files) to the local "resume-challenge-public" folder. I renamed the folder containing the resume files "front-end".

I then added, committed and pushed the folder to the remote Github repo.

Stage 2: Automating Github changes to the “front-end” folder to update Azure Blob Storage

To help, I found this Microsoft article: "Set up a GitHub Actions workflow to deploy your static website in Azure Storage":

https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blobs-static-site-github-actions?tabs=userlevel

Following the instructions, I realised I would need to install Azure CLI to run commands, so I went to this webpage to install the latest Azure CLI for Windows.

After installing the Azure CLI, I ran a new Command Prompt and then the command ‘az’ to check Azure CLI had installed successfully – it had.

I now had to generate the deployment credentials using this command.

az ad sp create-for-rbac --name {myStaticSite} --role contributor --scopes /subscriptions/{subscription-id}/resourceGroups/{resource-group} --sdk-auth

It asked me to run 'az login' to set up the account. After running the 'az login' command, I was redirected to the browser to log in to the Azure account. I then re-tried the command above to generate the deployment credentials. The new problem was that it said the tenant listed on the screen required Multi-Factor Authentication (MFA) and to use 'az login --tenant TENANT_ID' to explicitly log in to a tenant.

I tried this command:

az login --tenant 3d5f20d4-708d-43e3-bc6e-4d368019d594

It then re-directed me to the web browser to do MFA (using my phone). It then returned the following:

[
  {
    "cloudName": "AzureCloud",
    "homeTenantId": "xxx",
    "id": "xxx",
    "isDefault": true,
    "managedByTenants": [],
    "name": "AzurePayAsYouGoSubscription",
    "state": "Enabled",
    "tenantId": "xxx",
    "user": {
      "name": "greenaussieman@outlook.com",
      "type": "user"
    }
  }
]

I re-attempted to run the “az ad sp” command above. This time, the response was some JSON containing the credentials that I would need next.

I then went to GitHub, Settings > Secrets > Actions > New repository secret, and created a new secret by pasting in the response above.

I created a GitHub Actions workflow YAML file called frontend.yml.

At first, the whole repo was copied to the $web Blob container. I found out from the Microsoft docs (see link below) that -s (or --source) is the parameter specifying the source files, so I updated it from "." to "/front-end/.".

https://docs.microsoft.com/en-us/cli/azure/storage/blob?view=azure-cli-latest#az-storage-blob-upload-batch

I wanted to re-run the workflow without needing to commit to the repo again, so I added the manual trigger back to the YAML file (it was there by default originally, but I had removed it while following the instructions in the Microsoft article "Set up a GitHub Actions workflow to deploy your static website in Azure Storage" mentioned above).

After re-running the workflow, I found that the $web Blob container files had still not been updated. I looked at the details of the "build" on GitHub to investigate. The "Upload to blob storage" step of the workflow had been successful, making it even more strange. Looking through the details of this part of the workflow, I found errors saying "BlobAlreadyExists". It also suggested I add "--overwrite" to my command, which is what I did. When I re-ran the workflow, the files in the Blob storage were updated successfully this time.

The step “Purge CDN Endpoint” did not pass. It said that the endpoint was not found. I took out the prefix “https://” from the “name” of the endpoint in the command. This still had the same error – the endpoint resource was not found. I looked up the Microsoft docs on the command (https://docs.microsoft.com/en-us/cli/azure/cdn/endpoint?view=azure-cli-latest#az-cdn-endpoint-purge) but it didn’t give an example of a valid name for this parameter. I found a StackOverflow article:

https://stackoverflow.com/questions/54899815/not-able-to-purge-azure-cdn-endpoint-using-azure-cli

It suggested removing the suffix ".azureedge.net" from the endpoint name. I did this, ending up with just the endpoint name "rossi". I re-ran the workflow script manually, and this time everything passed.

Task 5: Getting an initial CI/CD pipeline working

The problem

I need a CI/CD pipeline on GitHub Actions to trigger when I make a commit requiring an automatic redeployment.

Actions

Stage 1: Deciding on my GitHub repo file structure

I had to decide on the file structure of my GitHub repo so that it would easily serve the deployment of the different parts of the front-end and back-end.

I finally decided on:

  • Main repo: https://github.com/daraymonsta/resume-challenge-public
  • Under this, two main subfolders: “front-end” and “back-end”
  • Under “front-end”, two subfolders: “arm-templates” and “static-website”
  • Under “back-end”, two subfolders: “arm-templates” and “func-app-visitor-api” (for the Function App’s API code)

Stage 2: Learning to use Git CLI commands

I learned to "git fetch" and "git pull" changes I made directly via GitHub down to my local folder. I also pushed changes from my local git folder to GitHub using these commands:
a. git status (to check for changes)
b. git add . (to add all changes to the index/staging)
c. git commit -m "a message regarding the update" (to commit the changes made on my local machine)
d. git push -u origin main (to push changes from my local repo folder to the origin/remote's main branch)

Stage 3: Learning about ARM templates

  1. I had to learn how to integrate the ARM templates within a main ARM template. A couple of YouTube videos on ARM template modularization helped provide the basics on this.

I found out I would need to either use nested or linked templates. I wanted to keep each resource I needed to create modularised, so I chose to use linked templates.

I set up Visual Studio Code with the Azure Resource Manager (ARM) Tools extension to get IntelliSense when coding ARM templates (this extension was suggested in the YouTube modularization video above).

For each resource that I had created manually in the Azure portal, I went to Automation > Export template and copied the template into Visual Studio Code as its own separate ARM template (JSON) file (these became the linked templates).

So that the linked templates from the last step could be referenced in a "main" ARM template for both the front- and back-end, each linked template needed a public URL. I pushed these linked templates up to the remote GitHub repo and worked out the public URL for each (to get the correct URL, I first opened a linked template on the GitHub website, then clicked on the "Raw" button).

I created a main ARM template (JSON) file using the basic structure suggested in the YouTube ARM Templates Modularization video – I created one for both the front- and back-end and then added the URLs found using the previous step.

I needed to set up separate GitHub Actions workflow (YAML) files to deploy the front-end and back-end Azure resources. Each workflow would need to validate and then create the Azure resources. I used a structure similar to that suggested in this video, "GitHub Actions for Azure Resource Manager":

https://www.youtube.com/watch?v=3dhTbyfW7Zc

I worked on trying to get my front-end ARM template workflow to work error-free before tackling the back-end. When I ran the workflow manually, I kept getting the error: “Error reading JToken from JsonReader. Path ‘ ’, line 0, position 0”.

It related to the last line of code (the echo with the toJSON/fromJSON conversion) in this step of my front-deploy.yml workflow file:

steps:
      - uses: actions/checkout@v2

      - name: ARM Template Toolkit (ARM TTK) Validation    
        id: armttkvalidation
        uses: aliencube/arm-ttk-actions@v0.3
        with:
          path: front-end/arm-templates/front-main-deploy.json

      - name: Test result
        shell: bash
        continue-on-error: true
        run: |
          echo "${{ toJSON(fromJSON(steps.armttkvalidation.outputs.results)) }}"

I looked up the resources for ARM TTK Actions on GitHub (https://github.com/aliencube/arm-ttk-actions). It helped me adjust my specified path so that it would be a valid folder. It also helped me specify a test to skip, as I knew that at least one of my apiVersions was more than 2 years old and I didn't want it to cause an error. However, using the code suggested by the creators of ARM TTK on their GitHub page did not eliminate the error.

I understood the error related to converting to/from JSON format when the results returned were blank. At first, I considered implementing a conditional in the workflow, so that if the results were blank, it would not try to convert to/from JSON. However, to be sure I could eliminate the error, I replaced the echo line above with:

echo "Results: ${{ steps.armttkvalidation.outputs.results }}"

It still didn't make sense that the ARM TTK Validation results were blank, so I looked at the details of the actual ARM TTK Validation step in the GitHub workflow, even though the validation was ticked as having passed. I found this error: "No azuredeploy.json or mainTemplate.json found beneath." I realised I needed to rename my front-main-deploy.json to azuredeploy.json to fix this. I then reinstated the original echo line above to convert the result to and from JSON format.

Another problem was a file-not-found error when running the workflow on this section of code:

      - name: Deploy ARM Template
        id: deployarmtemplate
        uses: Azure/arm-deploy@v1
        with:
          scope: resourcegroup
          subscriptionId: ${{ secrets.AZURE_SUBSCRIPTIONID }}
          region: ${{ env.region }}
          resourceGroupName: ${{ env.resourceGroup }}
          template: front-end/arm-templates/main-front-deploy.json
          deploymentMode: Incremental

This was strange, since the validation of the ARM template, also using "Azure/arm-deploy@v1", worked fine with the same template path. Then I realised that after the ARM template validation with "Azure/arm-deploy@v1", the template was uploaded as a build artifact. When actually deploying using the code above, the uploaded build artifact was used, so it no longer needed the GitHub path to the file. I specified the file name only (template: main-front-deploy.json) and it worked fine.

Stage 4: Simplifying the Front-End and Back-End

The storage infrastructure is currently shared between the static web Blob storage and the storage used by the Function App. Up to this point, I was thinking that the infrastructure could be delineated between front- and back-end as follows:

Front-end

  • Storage account for Blob static web storage
  • CDN
  • DNS

Back-end

  • Storage account
  • Cosmos DB
  • Function App (API)

However, after doing some research on what front-end and back-end mean, it seems to make sense that all infrastructure goes in the back-end. Therefore, I revised my GitHub repo:

  • The files in front-end/static-website were moved directly into the front-end folder
  • I updated the workflow file to upload the website files accordingly
  • I temporarily disabled the ARM templates from running.

I tested the GitHub Action workflow to see if it successfully deployed the static website to blob storage. It was successful. Yay!

Task 6: Refining front-end deployment

The problem

If the back-end (all the infrastructure) is generated through ARM templates and resource names change, this needs to be reflected in the workflow file that uploads the static web files to Blob storage.

Actions

I checked what resource references are required for the upload to Blob storage (by looking at the CLI commands run in the GitHub workflow file). These are:

  • resource group
  • storage account name
  • CDN profile name
  • CDN endpoint name

I was thinking these could be saved in an azuredeploy.parameters.json file, which (along with other parameters) will also be used for the back-end infrastructure generation. I then found this Stack Overflow article:

https://stackoverflow.com/questions/71523580/loop-through-json-file-and-set-each-value-as-a-variable-within-github-actions-wo

I wrote a test.yml workflow file to see if I could read the parameters from azuredeploy.parameters.json and convert them to environment variables in the YAML workflow script. It worked, so I then implemented and tested the "JSON to variables" code in the front-end GitHub workflow file (front-deploy-web-to-blob.yml).
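
The real workflow follows the approach from the Stack Overflow answer above, but the idea can be sketched in a few lines of Python run as a workflow step (the file path here is assumed):

import json
import os

# Load the ARM parameter file and expose each parameter as a workflow env variable
with open("back-end/arm-templates/azuredeploy.parameters.json") as f:
    params = json.load(f)["parameters"]

with open(os.environ["GITHUB_ENV"], "a") as env_file:  # GITHUB_ENV is set by the runner
    for name, entry in params.items():
        env_file.write(f"{name}={entry['value']}\n")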

Task 7: Improving the back-end deployment

The problem

I need to deploy the back-end (infrastructure) using ARM template deployment (infrastructure as code).

The plan

I want to tackle the deployment with dependencies last. The Cosmos DB Account and Storage Account do not have dependencies, so I’ll work on these first. Then the Function App afterwards.

So that I can make changes to the deployment in one place, I want all parameters for all ARM templates and GitHub Actions workflow jobs to come out of one parameter file (azuredeploy.parameters.json).

It makes sense to test each ARM template separately before running them as linked templates, but I found that when I tried doing so, I had to create a separate parameters file each time with only the parameters needed by that one template, otherwise it would not pass validation.

The simple Function App ARM template I found (and attempted to use as a base) needed a PackageUri to be specified, so I did some research on what this meant. It seems it needs to specify the URI of a Zip file of the code.

Extra problem: The Unknowns

There were lots of unknowns that I would need to investigate to see whether they were even possible, before figuring out what would work, let alone what was best. For example:

  • Can I deploy the Function App service using an ARM template with the PackageUri parameter set to "1"? This is suggested by this website (https://www.eliostruyf.com/deploy-azure-functions-package-github-actions/). It is a way of saying that there is no Zip file of code specified for the Function App, or that the code will be run locally.
  • If that works, how should the code be deployed?
  • Should I Zip up my own code before deploying?
  • Would my own Zip file of the code work when deployed?
  • If I Zip up my own code, how would it be deployed?
  • Could it be done within a GitHub Action workflow?
  • Should I manually deploy the Zip file to the Blob container, then update the Application Setting named WEBSITE_RUN_FROM_PACKAGE with the Uri of the Zip file?
  • Will this be enough for the Function App to run? Or does something else need updating also?
  • How will I get the SAS token needed as part of the Uri so that access will be given to the Zip file in the Blob container?
  • Will the GitHub Action code suggested on this website (https://www.eliostruyf.com/deploy-azure-functions-package-github-actions/) work to Zip the code and deploy it?
  • Will it be better to use the GitHub Action code suggested on the Microsoft website (https://docs.microsoft.com/en-us/azure/azure-functions/functions-how-to-github-actions?tabs=python) to deploy the Function App? (This one seems to add Python dependencies which might be important.)
  • If I use either of these two suggestions, I still need the Publish Profile (credentials) from the Function App, plus save it as a GitHub secret for access within the GitHub Action workflow:
  • Is it possible to retrieve the Function App's Publish Profile from within the GitHub Actions workflow? Answer: This GitHub Action page (https://github.com/marketplace/actions/azure-app-service-publish-profile) showed it was possible.
  • Is it possible to add a GitHub secret from within the GitHub Actions workflow? Answer: I found a Stack Overflow article (https://stackoverflow.com/questions/57685065/how-to-set-secrets-in-github-actions) that suggested it was possible if you used the GitHub Actions API, but it didn't look easy.

Discovering the answers

Note: The current working version of the Function App was manually created with the Azure Portal in the resource group CloudResumeChallenge. The code was deployed to it using an Extension in Visual Studio Code. This version of the Function App will be referred to as the “working Function App”.

When the deployment of my main ARM template failed, I tried to get more detail on the error by querying the activity log with the deployment's correlation ID:

Get-AzActivityLog -CorrelationId 8ad64caf-5462-43ec-9f39-291054ceca97 -DetailedOutput

Get-AzLog `
   -CorrelationId 8ad64caf-5462-43ec-9f39-291054ceca97 `
   -Verbose

However, the commands didn’t really give me any more helpful details – they just repeated the same error message. Therefore, I needed to deploy the nested template for Function App by itself (I already knew the other two nested templates worked). I didn’t want to have to go through GitHub Actions to test the individual ARM template as I wanted the quickest and most direct way of testing it. I learnt how to validate the template individually using this CLI command (https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-tutorial-use-parameter-file?tabs=azure-cli):

az deployment group validate --resource-group CloudResumeChallenge88 --template-file function-app.json --parameters azuredeploy.parameters.json

This met with an error because I was using the same parameter file as before (which included all the parameters for all the services being deployed, as used by the main template with nested templates). Because this contained parameters that were not being passed into the Function App ARM template, I created a separate parameter file which only listed the parameters passed into this template.

Task 8: Testing Creation of the Package Uri

The aim

If I was to try deploying the Zip file manually (before deploying the Function App’s ARM template), I would need to be able to generate a SAS token, retrieve the Blob’s URL (for the Zip file to be deployed to a Blob container), and then concatenate them together to make the Package Uri which I could then specify in the Function App’s ARM template.

Actions

Potential issue: I would need to be able to get the output of Azure CLI commands run within GitHub Actions.
The fix: I found a way to assign the output of a CLI command to a variable, which could then be accessed in later steps.

Potential issue: There seemed to be no way to get the URL for the storage account without having the connection string first.
The fix: I found I could get the full connection string using the “az storage account show-connection-string” CLI command.

Potential issue: The connection string command returned JSON with extra parts beside the actual connection string.
The fix: I used the replace commands in the GitHub Action script to remove the irrelevant parts.

Success: I ended up creating a working GitHub Action workflow script called test-sas-new.yml.
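
The working script does this with Azure CLI steps, but the idea of building the package URI (blob URL + SAS token) can be sketched with the azure-storage-blob Python SDK; the storage account name and key below are placeholders:

from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

account = "funcdeploystorage88"        # placeholder storage account name
container = "github-actions-deploy"
blob = "backend.zip"

sas = generate_blob_sas(
    account_name=account,
    container_name=container,
    blob_name=blob,
    account_key="<account-key>",       # placeholder
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)
package_uri = f"https://{account}.blob.core.windows.net/{container}/{blob}?{sas}"
print(package_uri)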

Task 9: Testing Automation of Creating a Zip File of the Function App Code

Rationale

I could see it was going to be a lot more work to put the Zip package of code in Azure storage as a blob, and generate the SAS to access it, than it would be to just automate the Zip file to be saved on GitHub (which doesn’t need a SAS to access it).

The aim

I wanted to be able to create a Zip file (Package) of the code already in the folder back-end/func-app-visitor-api in the GitHub repo, then save it on my GitHub repo so it could be used as the public URL I could specify for the PackageUri in the Function App ARM template.

Actions

I found multiple ways to Zip up the files using a GitHub workflow. I could then see I would need to 'git push' the added Zip file to the main repo, and that I would need to save GitHub credentials as a GitHub secret so they could be used for the 'git push'. But then I realised I wasn't sure, even if the Zip file were manually put there in the GitHub repo, whether its location on the GitHub repo would work for deployment as the PackageUri parameter in the ARM template.

Task 10: Trying Function App deployment with a manually created Zip file (no Python dependencies)

Pre-setup

I manually created a Zip file of my Function App's code on my local Windows computer, then pushed it to the GitHub repo.

The aim

I wanted to see if deploying the Function App with the ARM template (with the parameter PackageUri pointing to the URL location of the Zip file on the GitHub repo) would create a functioning API.

The problem

I could see that after deployment of the Function App (specifying the PackageUri as the backend.zip on my GitHub repo), testing an API function using a browser showed the function wasn't working.

How did I know it was a problem?

I compared it to the running version of the Function App in the original resource group CloudResumeChallenge (I tested it by going to the Azure portal, the resource group, the Function App service, Functions, the AddVisitor function, Get Function Url, copying the Url and pasting it into the browser, then even trying it with a website passed into it). I could see the Function App deployed via the ARM template produced an error in the browser when trying to run. This was because the zip file used for deployment (which I had just manually zipped up from the code folder) was missing the Python dependencies.

I'd have to try something else...

Task 11: Trying Function App deployment with a Zip file including Python dependencies

The aim

I wanted to see if I could make the Function App work if I used a Zip package of the Function App code that was already being used in the working Function App.

Actions

When I replaced the Zip file (the one I had zipped up manually) with the zip file I downloaded from the functioning Function App in the resource group CloudResumeChallenge (I did this by going to the storage account containing the Function App, Containers, going into the 'github-actions-deploy' container, clicking on the zip file, and clicking 'Download'), the Function App worked.

Task 12: Trying to get the Zip file to deploy to the storage account associated with the Function App

Rationale

I could see that the Zip file was not automatically deployed to the storage account associated with the Function App when the ARM template was run. By using the GitHub repo location of the Zip file as the PackageUri parameter in the ARM template, the same Uri would become the Application Setting WEBSITE_RUN_FROM_PACKAGE. This means the Function App would be run from the code in the GitHub zip file. I felt this was a bit dangerous, as the Zip package of code feels too exposed on GitHub: if anything happened to the backend.zip file on GitHub, the Function App would immediately stop working.

Note: Through the Azure Portal, it was possible to change the code package being used by the Function App (go to the Function App, Configuration, then look at Application Setting with the name WEBSITE_RUN_FROM_PACKAGE. After clicking ‘Save’, the Function App would re-start using the new code package location).

The aims

A. Not specifying the PackageUri when the Function App was initially created by the ARM template, but setting it to "1" (which my reading suggested could be done).
B. Deploying the package from the code on GitHub instead of manually creating a zip file when I wanted to deploy.
C. Having the Python dependencies installed as part of the code's deployment to the Function App.

Actions

Deploying the Function App from an individual ARM template (with its own tailored parameters file) with the PackageUri set to “1” worked. To validate the template, I used the CLI command:

az deployment group validate --resource-group CloudResumeChallenge88 --template-file function-app.json --parameters function-app.parameters.json 

After the validation passed, I then created the Function App using this CLI command:

az deployment group create --resource-group CloudResumeChallenge88 --template-file function-app.json --parameters function-app.parameters.json 

Since I now knew the Function App could be deployed without having to specify the Zip file of packaged code, I then concentrated on aims B and C.

I incorporated the suggested code on the Microsoft docs into a test GitHub workflow (https://docs.microsoft.com/en-us/azure/azure-functions/functions-how-to-github-actions?tabs=python). Once I knew it worked, I integrated it with the back-deploy.yml workflow.

Task 13: Ensuring the deployed Linux version matched

Rationale

In deploying the Function App in the GitHub Actions workflow (back-deploy.yml), the Python dependencies need to be installed and the Python version specified for the deployment of the code. I didn't want this to mismatch the Python version specified in the ARM template for the Function App.

The aim

Ensure a matching version of Python is used for the ARM deployment of the Function App and the Python version of the dependencies installed in the code deployment.

Actions

I modified the ARM templates and parameter file so that the version '3.9' comes from the pythonVersion parameter (in azuredeploy.parameters.json). The linuxFxVersion parameter passed into the Function App ARM template was originally set as 'python|3.9'. I modified the parameters in the parameters file so that linuxFxVersion is now made up of a concatenation of functionWorkerRuntime (which sets the language as 'python') and the new parameter pythonVersion (which is '3.9').

Task 14: Getting the front-end to deploy if the back-end deploys using GitHub Actions

Building understanding

What I did: I did some reading about GitHub workflows (https://docs.github.com/en/actions/using-workflows/about-workflows) until I found the reference to what was relevant: Reusing workflows (https://docs.github.com/en/actions/using-workflows/reusing-workflows). This gave some example code I was able to imitate to enable the calling of front-deploy.yml from back-deploy.yml.

Actions

I updated the front-end GitHub workflow (front-deploy.yml) so that it could be called. I added this code:

on:
  workflow_call:
    secrets:
      AZURE_SUB_CREDENTIALS:
        required: true

I also changed the Azure login credentials used to secrets.AZURE_SUB_CREDENTIALS (which has access to the entire subscription, so will work for any resource group created in the caller workflow), instead of secrets.AZURE_CREDENTIALS_RESUMESTORAGE002 (which is scoped to access only the CloudResumeChallenge resource group).

I updated the back-end GitHub workflow (back-deploy.yml) so that it calls the front-end GitHub workflow after the ARM template deployment finishes. To do that, I added this line to the job that calls front-deploy.yml:

needs: [deployCode]

Task 15: Deploying DNS Zone and CDN Endpoint

Pre-requisite

A domain name I can use for testing – I have a domain name that I can temporarily use for testing: generalsettings.com

The plan

Using the Azure Portal, create just what is needed for each step, exporting the ARM template (or checking what’s changed in the template) along the way to help me create my own ARM templates which I can use to deploy everything in the right order.

What I learnt

  1. When you create a DNS Zone on the portal, it automatically creates the NS and SOA records

  2. Before Azure will allow the CDN endpoint’s custom domain to be created, you need to:

    • add a CNAME record to the DNS Zone with www.generalsettings.com pointing to the CDN endpoint.
    • make sure the Name Servers for your domain name point to ones listed when you create the DNS Zone. You need to do this with whoever is hosting your domain name.
  3. Once the CDN's custom domain is created, HTTPS still needs to be enabled with the CLI command "az cdn custom-domain enable-https". However, because provisioning an automatic (Microsoft-managed) certificate on a custom domain takes time, it is best to check if HTTPS has already been enabled on your custom domain. Otherwise, each time you issue "az cdn custom-domain enable-https" the certificate is re-issued from scratch, and that takes time (10-15 min). To check the custom domain's HTTPS provisioning state, you use the CLI command "az cdn custom-domain show". This returns JSON; if the "customHttpsProvisioningState" is "Enabling" or "Enabled", then you don't need to issue "az cdn custom-domain enable-https".

  4. There are two essential rules I originally created using the portal's "Rules Engine" (in the CDN endpoint). These rules do two things: 1) enforce HTTPS, ensuring anyone who goes to the HTTP version of the static site is automatically switched to HTTPS; 2) permanently redirect anyone going to the root domain to the www subdomain. By using the "Export template" feature, you can find out what portion of the ARM template generates these rules.

Actions

I used a generic DNS Zone ARM template as a starting point. I then used what I learnt from the Export template feature to work out what to modify.

Task 16: Fixing Front-deploy.yml in the GitHub workflow

The problem

The workflow failed because the Azure file upload failed due to lack of permissions to the Blob storage.

What I wanted to try

Generating a temporary SAS token that could be generated on the fly in the workflow and immediately used for the upload.

Why did I want to try this

I wanted to be able to use Python code within my GitHub workflow to generate a parameter that would then be used in the CLI command.

Why this didn’t work

It seems there is a bug when I use this command:

az storage container generate-sas --name "$web" --https-only --permissions dlrw --output tsv --account-name websiteresumestorage88 --auth-mode login --as-user --expiry 2022-06-13T10:22Z

The error generated was ERROR: string index out of range.

Why this appears to be a bug with Azure

  1. The exact same command runs fine on my local Windows machine with the exact same version of Azure CLI (2.37.0)
  2. I tried running the command in 3 different ways in the GitHub Actions workflow.

Why I decided to try another way to solve the problem

  1. There is nothing on the problem when you do at Google search.
  2. I have spent enough time on it
  3. I don’t think it is as secure as solely using credentials saved in GitHub secrets to do the upload.

What I wanted to try next

Adding the role "Storage Blob Data Contributor" to the existing Service Principal used by the workflow script.

Rationale

This method will not expose anything which could be used by an attacker to gain access to the blob storage (e.g. like a SAS token or account key).

Actions

  1. I looked up the RBAC details for the Service Principal already being used in the front-end deployment script. This SP already has the “contributor” role with subscription scope. I found out the clientId.
  2. I used the clientId in the following command to add the “Storage Blob Data Contributor” role to this Service Principal: az role assignment create --assignee --role "Storage Blob Data Contributor" --scope /subscriptions/

Task 17: Enabling HTTPS for the CDN Endpoint’s Custom Domain

The problem: HTTPS needs to be enabled for 'www.' so that the TLS certificate is managed by Microsoft and you do not need to pay for an SSL certificate.

Actions

HTTPS is enabled for a custom domain by running this command:

az cdn custom-domain enable-https -g <resource group> --profile-name <CDN profile name> --endpoint-name <CDN endpoint name> -n <custom domain in hyphenated format e.g. www-generalsettings-com>

However, when HTTPS is already enabled and you run the command, it goes through the process of re-validation and issuing the certificate from scratch, wasting quite some time (maybe 15min).

Therefore, I designed a short GitHub workflow to check if HTTPS was already enabled. If it was (or was in the process of “Enabling”), then the CLI command to enable HTTPS was skipped.
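
The check itself boils down to something like this (a sketch using Python to call the CLI; the actual workflow uses plain CLI steps):

import json
import subprocess

def https_needs_enabling(rg, profile, endpoint, custom_domain):
    # Ask Azure for the custom domain's current HTTPS provisioning state
    out = subprocess.run(
        ["az", "cdn", "custom-domain", "show", "-g", rg,
         "--profile-name", profile, "--endpoint-name", endpoint, "-n", custom_domain],
        capture_output=True, text=True, check=True).stdout
    state = json.loads(out).get("customHttpsProvisioningState")
    # Skip "az cdn custom-domain enable-https" if it's already Enabling/Enabled
    return state not in ("Enabling", "Enabled")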

Task 18: Getting the back-end deployment jobs in the right order

Rationale

You want jobs to run concurrently where possible to save deployment time.

Actions

Stage 1 - Getting the order right

I worked out the soonest possible time each job could run (i.e. as soon as its dependencies were in place). I updated back-end.yml to reflect this order.

This is the order I worked out:

  1. ARM template validation
  2. ARM template deployment
  3. Deploy Code to App / Deploy front-end / Enable HTTPS on Custom Domain
  4. Add Application Settings in App

Stage 2: Testing the GitHub Actions workflow

I checked that the back-end.yml workflow completed successfully while the Azure resources were already deployed.
I then deleted the testing resource group from Azure and checked where the workflow completed successfully and where it didn't. Next, I examined why.

The errors

These were the errors:

  • "The Resource 'Microsoft.Storage/storageAccounts/websiteresumestorage88' under resource group 'CloudResumeChallenge88' was not found.
  • "The referenced resource '/subscriptions/***/resourceGroups/CloudResumeChallenge88/providers/Microsoft.Cdn/profiles/resumecdnprofile88/endpoints/generalsettings' was not found.
The investigation

Using the portal, I found out everything seemed to have deployed, except the endpoint under the CDN profile.

As everything deployed fine while all the resources already existed, I thought the errors were probably related to dependencies, i.e. the endpoint trying to be created before the storage account existed.

I added dependencies to the CDN's linked template properties in the master ARM template (azuredeploy.json), both for the website storage account and the DNS zone. I also looked for any other dependencies by tracking what parameters were passed to each linked template.

I found out that the function_app_name was passed to the linked template which creates the website storage account. I investigated why and found a file share created using the function app name (I originally was going to use the same storage account to store function app files, but then realised it would just be easier to keep them separate). This file share was now redundant.
What I did: I removed the redundant file share from the storage.json file.

What happened on git push: When the workflow was triggered, the ARM templates did not pass validation due to the error:
Deployment template validation failed: 'The resource 'Microsoft.Storage/storageAccounts/websiteresumestorage99' is not defined in the template.

Why the error? After some investigation with Google, I found out that if you want to indicate dependencies (dependsOn) in a template with linked templates, you need to express the dependency in terms of the linked template deployments themselves.

Instead of:

"dependsOn": [
  "[resourceId(resourceGroup().name, 'Microsoft.Storage/storageAccounts', parameters('website_storage_account_name'))]",
  "[resourceId(resourceGroup().name, 'Microsoft.Network/dnszones', variables('dnszone_domain_name'))]"
Enter fullscreen mode Exit fullscreen mode

I needed:

  "dependsOn": [
  "[resourceId('Microsoft.Resources/deployments', 'linkTemplateStorage')]",
  "[resourceId('Microsoft.Resources/deployments', 'linkTemplateDnsZone')]"
Enter fullscreen mode Exit fullscreen mode
Homing in on the problem

From deploying the ARM templates individually, I worked out that the CDN profile's endpoint would still not create custom domains unless it found the DNS 'CNAME' records for the custom domains. Also, every time a DNS Zone was created afresh, the name servers for the DNS Zone would likely change. This meant that I needed to log into GoDaddy (the host for my domain name) to update the domain's name servers to the new ones. After this, I still needed to wait 30 min or so for the updated DNS servers to propagate, otherwise Azure would not permit the creation of the custom domains.

Actions to fix it

I knew I would need to add a job in the back-end deployment workflow – this job would list the nameservers. The job would run as soon as the nameservers had been assigned. The soonest point at which the name servers would be assigned would be after the ARM template deployment of the DNS Zone (NS records). This CLI command was used to find out the nameservers:

az network dns record-set ns list -g <resource group> -z <domain name e.g. generalsettings.com>

Since JSON with many irrelevant properties is returned from the command, I wrote some Python code to be run in the workflow to extract just the 4 Microsoft nameservers. It was easier to make this job into a separate YML workflow file (get-nameservers.yml) that was called from the main workflow file (back-deploy.yml).
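
A sketch of the Python used in that step might look like this (the exact property names in the CLI's JSON output, such as "nsRecords" and "nsdname", are from memory, so treat them as assumptions):

import json
import subprocess

# List the NS record sets for the zone and pull out just the name servers
result = subprocess.run(
    ["az", "network", "dns", "record-set", "ns", "list",
     "-g", "CloudResumeChallenge88", "-z", "generalsettings.com"],
    capture_output=True, text=True, check=True)

nameservers = []
for record_set in json.loads(result.stdout):
    for ns in record_set.get("nsRecords", []):
        nameservers.append(ns["nsdname"])

print("\n".join(nameservers))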

Task 19: Making the custom domain ARM template use the master parameter file

The problem

I separated the ARM deployment of the custom domains from the main template (azuredeploy.json). The new custom domain ARM template was named cdn-custom-domain.json. This caused a problem, since I wanted only one master parameter file (azuredeploy.parameters.json). How could I make a separate (smaller) ARM template use the master parameter file?

Actions

I managed to fix this by loading all the Azure deployment parameters from the master parameter file into the ARM template that only deploys the custom domains.

Task 20: Testing the GitHub Actions workflow again

Rationale

Having the GitHub back-end deploy workflow work when the resources were all or mostly deployed did not prove that all dependencies were created in the right order. The ultimate test would be to delete the resource group, change the resource names (including the resource group name) within the master parameters file (azuredeploy.parameters.json), then run the back-end deployment.

The problem

The GitHub Actions back-end deployment workflow was only successful after I had already done separate ARM template deployments using the CLI for the DNS Zone and CDN profile.

Actions

After quite some testing, I isolated the problem: I could only deploy the custom domains (for the CDN profile’s endpoint) and enable HTTPS for the www domain after the correct Azure name servers were configured with my domain name service provider.

Since the Azure name servers for my DNS Zone seemed to vary with every new deployment, I separated the deployment of the CDN custom domains and put this in its own ARM template. Only after the custom domain deployment for the apex domain, could the HTTPS be enabled on it using Azure CLI in the GitHub Action workflow.

Summary of final workflow

The GitHub Actions workflow will trigger the back-end deployment once it receives a Git commit to the GitHub repo. (Triggering the back-end deployment will also trigger the front-end deployment, as a change in the back-end may affect the front-end.)

Fresh deployment

Note: This is what happens when the back-end workflow is triggered and it is run for the first time making a new resource group and all the resources within it.

  1. The GitHub Action workflow will fail when it gets up to deploying the CDN custom domains, however the Azure name servers will be displayed
  2. I need to set the Azure name servers with my domain name host
  3. Re-run the GitHub Action workflow – this time it will run through all steps successfully.

Re-deployments

Any commit to back-end files after this will cause the GitHub Actions workflow to run successfully.

Any commit on the front-end files will trigger just the front-end GitHub Action workflow which only redeploys the front-end files (i.e. uploads them to Blob storage).

Final GitHub Action workflow

This is the final order I worked out for the GitHub Action workflow to deploy all resources (see image below):

1) ARM template validation
2) ARM template deployment (for everything but the CDN endpoint’s custom domains)
3a) Deploy Code to App
3b) Deploy front-end
3c) Print Name Servers
4a) Add Application Settings in App (depends on 3a)
4b) Deploy custom domains (depends on 3c)
5) Enable HTTPS on Custom Domain (depends on 4b)

[Image: diagram of the final GitHub Actions workflow showing the order of the deployment jobs]
