<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Karl Cooke</title>
    <description>The latest articles on DEV Community by Karl Cooke (@karl_itnerd).</description>
    <link>https://dev.to/karl_itnerd</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F762076%2Fc9580ed0-0702-4c8a-a8a7-dfca2d0351d1.jpg</url>
      <title>DEV Community: Karl Cooke</title>
      <link>https://dev.to/karl_itnerd</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/karl_itnerd"/>
    <language>en</language>
    <item>
      <title>My Cloud Native Adventure - Part 1</title>
      <dc:creator>Karl Cooke</dc:creator>
      <pubDate>Sat, 29 Jan 2022 10:11:20 +0000</pubDate>
      <link>https://dev.to/karl_itnerd/my-cloud-native-adventure-part-1-55ab</link>
      <guid>https://dev.to/karl_itnerd/my-cloud-native-adventure-part-1-55ab</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Hy3_A2gs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/cloud-native-adventure-1/CNA_Header_Image.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Hy3_A2gs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/cloud-native-adventure-1/CNA_Header_Image.png" alt="Cloud Native Adventure" width="880" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Welcome to part 1 of My Cloud Native Adventure. If you read my &lt;strong&gt;&lt;a href="https://irishtechie.cloud/reviewing-2021-and-looking-forward/"&gt;last article&lt;/a&gt;&lt;/strong&gt;, looking back over 2021 and forward to 2022, you will know that I was embarking on a cloud native learning journey. I want to write about my learning adventures under the title &lt;strong&gt;&lt;a href="https://irishtechie.cloud/categories/my-cloud-native-adventure/"&gt;My Cloud Native Adventure&lt;/a&gt;&lt;/strong&gt;. I’ll be keeping all of the articles together in a collection that can be accessed by clicking the previous link.&lt;/p&gt;

&lt;p&gt;The first step in my learning for this year is Linux. I’ve used Linux intermittently over the years and know enough to do some damage if I have to, but I really want to improve my Linux skills this year. Linux is such an integral part of the public cloud, so first up had to be a refresher in Linux. I’ll talk more about the resources I’m using shortly but…&lt;/p&gt;

&lt;h2&gt;First things first&lt;/h2&gt;

&lt;p&gt;As with most things I write about, there is an Azure twist to my Cloud Native learning. Every time I need it, I deploy an Azure VM running Ubuntu Server so I can practice the things I’m learning. I could use my local WSL2 implementation but I don’t want to break it as I use it every day.&lt;/p&gt;

&lt;p&gt;I use the &lt;strong&gt;&lt;a href="https://docs.microsoft.com/en-us/cli/azure/what-is-azure-cli"&gt;Azure CLI&lt;/a&gt;&lt;/strong&gt; commands below to deploy/destroy the Azure VM as needed. If you need to install Azure CLI you can start &lt;strong&gt;&lt;a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli"&gt;here&lt;/a&gt;&lt;/strong&gt;. If you don’t have an Azure subscription handy, you can sign up for free &lt;strong&gt;&lt;a href="https://azure.microsoft.com/en-gb/free/"&gt;here&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You can access the full script in my &lt;strong&gt;&lt;a href="https://github.com/irishtechie/My_Cloud-Native_Adventure"&gt;GitHub Repo&lt;/a&gt;&lt;/strong&gt;. Give it a star as there will be plenty of updates here as the adventure progresses!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Set variables you will need below
resourceGroupName=rg-learnlinux
vmName=vm-learnlinux
adminUsername=sysadmin
vnetName=vnet-learnlinux
vnetAddressSpace=172.16.0.0/16
snetName=snet-learnlinux
snetAddressSpace=172.16.16.0/24
nsgName=nsg-learnlinux
nsgSSHRuleName=SSH_Secure
vmImage=UbuntuLTS
sshPublicKeyPath=/home/&amp;lt;username&amp;gt;/.ssh/&amp;lt;keyname&amp;gt;.pub
location=northeurope
myPublicIP=$(curl -s ifconfig.me) # Obtain your public IP address and place it in a variable.

# Create an SSH Key Pair for use when creating your new linux VM.
# If you are using Linux on WSL2 or natively, you will be able to use 'ssh-keygen' out of the box. 
# For Windows 10 and above 'ssh-keygen' is also built in and can be used via the Command Line or PowerShell.

ssh-keygen -m PEM -t rsa -b 4096
# The above command will give you some interactive prompts for naming and password protecting your key pair.
# Remember to place the key pair into your '.ssh' directory.

# Use to log in to your Azure tenant. This version will open a browser window and ask you to sign in.
az login 

# Use if, like me, you are logged into multiple tenants in multiple browsers. This will give you a link and code to use instead.
az login --use-device-code 

# List the subscriptions you have access to if you have more than one with a single login.
az account list --output table

# Set the active subscription. Similar to Set-AzContext in PowerShell.
az account set --subscription &amp;lt;SubscriptionID&amp;gt;

# Create Resource Group.

az group create --location $location --name $resourceGroupName

# Create Network Security Group and add SSH rule.

az network nsg create \
  --name $nsgName \
  --resource-group $resourceGroupName

az network nsg rule create \
  --nsg-name $nsgName \
  --resource-group $resourceGroupName \
  --name $nsgSSHRuleName \
  --description "Secure SSH Access to my Public IP address only" \
  --priority 1000 \
  --access Allow \
  --direction Inbound \
  --protocol Tcp \
  --source-address-prefixes $myPublicIP \
  --destination-port-ranges 22

# Create a Virtual Network for your test VM

az network vnet create \
  --name $vnetName \
  --resource-group $resourceGroupName \
  --address-prefixes $vnetAddressSpace \
  --subnet-name $snetName \
  --subnet-prefixes $snetAddressSpace

# Add NSG to Subnet

az network vnet subnet update \
  --name $snetName \
  --vnet-name $vnetName \
  --resource-group $resourceGroupName \
  --network-security-group $nsgName 

# Create your test VM using the SSH key pair generated earlier.
# Passing --nsg "" prevents a NIC-level NSG from being created; the subnet NSG applies instead.

az vm create \
  --resource-group $resourceGroupName \
  --name $vmName \
  --image $vmImage \
  --vnet-name $vnetName \
  --subnet $snetName \
  --nsg "" \
  --admin-username $adminUsername \
  --ssh-key-values $sshPublicKeyPath \
  --output json

# You will get a JSON object back similar to the one detailed in the README.

# Save money by deallocating the VM between study sessions or deleting your resources when you're done. 

# Deallocate your VM. The only thing you'll get charged for is storage.

az vm deallocate \
  --resource-group $resourceGroupName \
  --name $vmName

# Start VM when you're ready to study.

az vm start \
  --resource-group $resourceGroupName \
  --name $vmName

# If you delete the resource group you will delete all resources contained within it.

az group delete --name $resourceGroupName

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;Let’s talk resources&lt;/h2&gt;

&lt;p&gt;Now that you know how I spin up a temporary VM to learn on, let’s talk about the resources I’m using for the Linux refresher.&lt;/p&gt;

&lt;p&gt;First up is the &lt;strong&gt;&lt;a href="https://training.linuxfoundation.org/training/introduction-to-linux/"&gt;Introduction to Linux&lt;/a&gt;&lt;/strong&gt; course from &lt;strong&gt;&lt;a href="https://www.linuxfoundation.org/"&gt;The Linux Foundation&lt;/a&gt;&lt;/strong&gt;. Delivered via the edX learning platform, it’s a great course for anyone just getting started with Linux or, like me, anyone who needs a refresher. The course is free; if you want to complete the assessments and receive a certificate at the end, you can upgrade for a fee. The free version is fine for my needs.&lt;/p&gt;

&lt;p&gt;Next up is &lt;strong&gt;&lt;a href="https://www.amazon.co.uk/Hackers-Getting-Networking-Scripting-Security/dp/1593278551"&gt;Linux Basics for Hackers&lt;/a&gt;&lt;/strong&gt;. This is another great resource and is available on Amazon UK for about £20. While aimed at aspiring cyber security professionals, there is plenty of information here to help anyone wanting to improve their Linux knowledge.&lt;/p&gt;




&lt;h2&gt;Less talk more learn&lt;/h2&gt;

&lt;p&gt;The first few weeks of my Linux refresher have been spent primarily refamiliarising myself with key terms, commands, and the shell. I have documented a few of those key items below. I will also keep an updated list in the repo mentioned above. Look for the files &lt;strong&gt;&lt;a href="https://github.com/irishtechie/My_Cloud-Native_Adventure/blob/main/Linux/key_terms.md"&gt;key_terms.md&lt;/a&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;a href="https://github.com/irishtechie/My_Cloud-Native_Adventure/blob/main/Linux/key_cmds.md"&gt;key_cmds.md&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; If you are coming from a Windows background, you most likely haven’t ever cared much about case sensitivity in the command line, PowerShell, or the filesystem. When using Linux, you will soon realise that it cares very much. When running commands in the shell (or in Bash scripts) in particular, you need to pay attention to casing for items like command switches, file paths, and file names.&lt;/p&gt;
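&lt;p&gt;A quick illustration of that case sensitivity. This is a minimal sketch you can run in any Linux shell; the file names are just examples:&lt;/p&gt;

```shell
# On Linux, Notes.md and notes.md are two distinct files.
touch Notes.md notes.md

# Both files exist side by side.
ls -1 Notes.md notes.md

# Switches are case sensitive too: 'ls -a' and 'ls -A' behave
# differently, and 'LS' is not a command at all.

# Clean up the example files.
rm Notes.md notes.md
```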

&lt;h3&gt;Key Terms&lt;/h3&gt;

&lt;p&gt;Key terms to know as you start your Linux learning journey. Keep an eye on the markdown file mentioned above as I will be keeping it updated as I progress with the refresher.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Open Source Software (OSS):&lt;/strong&gt; Open Source is a software development model where you create and develop your software &lt;em&gt;in the open&lt;/em&gt;. This encourages open collaboration and free access to software. This means that anyone can review and contribute to the software, which can foster greater public trust than closed-source or proprietary software.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Distribution:&lt;/strong&gt; A distribution is a collection of software and applications that make up an Operating System based on the Linux Kernel. Distributions can be separated into &lt;em&gt;‘families’&lt;/em&gt; based on the upstream parent that the OS is based on. An example of a distribution family would be Debian, with Ubuntu and Pop!_OS being members of the Debian family.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kernel:&lt;/strong&gt; A kernel is the foundation layer of your operating system, handling the interactions between the computer hardware and the software running within the operating system. All Linux distributions are based on the Linux Kernel, which was first created in 1991 by Linus Torvalds. &lt;em&gt;(ref: &lt;a href="https://en.wikipedia.org/wiki/Linux_kernel"&gt;Wikipedia&lt;/a&gt;)&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Boot Loader:&lt;/strong&gt; A boot loader is a program that starts first and loads the kernel into memory. The kernel then starts the rest of the operating system. GRUB is the most common boot loader for Linux distributions. See ref. for more details. &lt;em&gt;(ref: &lt;a href="https://itsfoss.com/what-is-grub/"&gt;itsfoss.com&lt;/a&gt;)&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FHS (Filesystem Hierarchy Standard):&lt;/strong&gt; The FHS defines the directory structure for Linux operating systems. The FHS is maintained by &lt;strong&gt;The Linux Foundation&lt;/strong&gt;. See the image below and the reference for more information on the FHS. &lt;em&gt;(ref: &lt;a href="https://linuxconfig.org/filesystem-basics"&gt;linuxconfig.org/filesystem-basics&lt;/a&gt;)&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Z2_CUJ8m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/cloud-native-adventure-1/fhs_dirtree.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Z2_CUJ8m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/cloud-native-adventure-1/fhs_dirtree.jpg" alt="Filesystem Hierarchy standard" width="833" height="676"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Working directory:&lt;/strong&gt; Your working directory is the directory in which you are performing tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Absolute path:&lt;/strong&gt; An absolute path is a file or directory path that always starts in the root directory. For example, the absolute path to my home directory is ‘/home/kcooke’. An absolute path always starts with ‘/’.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Relative path:&lt;/strong&gt; A relative path always starts in your current working directory and never starts with ‘/’.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shell:&lt;/strong&gt; A shell is the command line interface in a Linux system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bash:&lt;/strong&gt; Bash, or the Bourne-Again Shell, is a shell and scripting language that operates on Linux, macOS, and BSD systems. You can use Bash to issue single commands or to author powerful scripts that automate repetitive tasks.&lt;/li&gt;
&lt;/ul&gt;
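&lt;p&gt;The working directory, absolute path, and relative path terms above can be seen in action with a few commands. This is a minimal sketch; the directory names are just examples:&lt;/p&gt;

```shell
# Create a scratch area to work in.
demo=$(mktemp -d)

# Change directory using an absolute path (always starts with '/').
cd "$demo"
mkdir -p projects/linux

# Change directory using a relative path (resolved from the
# current working directory, never starts with '/').
cd projects/linux

# 'pwd' prints the absolute path of the working directory.
pwd

# '..' refers to the parent directory, so this is also a relative path.
cd ../..
```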

&lt;h3&gt;Key Commands&lt;/h3&gt;

&lt;p&gt;You know the drill by now! :) See below for key commands to be familiar with as you get to grips with Linux.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;man:&lt;/strong&gt; One of the most important things to know when learning a new operating system is how to access information on the various commands. The ‘man’ command shows the reference manual for specific commands. It usually contains a description, syntax, and examples. It’s a great place to start when using a command for the first time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ls:&lt;/strong&gt; Use ‘ls’ to list directory contents. Use ‘man ls’ to get more information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;cat:&lt;/strong&gt; Use ‘cat’ to concatenate files and, also, to print the contents of a file in the terminal. Use ‘man cat’ to get more information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;cd:&lt;/strong&gt; Use ‘cd’ to change your current working directory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;whoami:&lt;/strong&gt; Use ‘whoami’ to show details on the currently logged-in user. Use ‘man whoami’ to get more information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;hostname:&lt;/strong&gt; Use ‘hostname’ to show the system hostname. Use ‘man hostname’ to get more information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;pwd:&lt;/strong&gt; Use ‘pwd’ to print the name of the current working directory. Use ‘man pwd’ to get more information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;systemctl:&lt;/strong&gt; Use ‘systemctl’ to control the systemd system and service manager. Use ‘man systemctl’ to get more information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;touch:&lt;/strong&gt; Use ‘touch’ to change timestamps on a file. However, its most common use is to create a new empty file. For example, ‘touch file.md’ will create a new file called file.md if one doesn’t already exist. Use ‘man touch’ to get more information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;mkdir:&lt;/strong&gt; Use ‘mkdir’ to create new directories. Use ‘man mkdir’ to get more information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;rmdir:&lt;/strong&gt; Use ‘rmdir’ to remove empty directories. Use ‘man rmdir’ to get more information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;rm:&lt;/strong&gt; Use ‘rm’ to remove files and/or directories. Use ‘man rm’ to get more information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;cp:&lt;/strong&gt; Use ‘cp’ to copy files and directories. Use ‘man cp’ to get more information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;mv:&lt;/strong&gt; Use ‘mv’ to move or rename files and directories. Use ‘man mv’ to get more information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;grep:&lt;/strong&gt; Use ‘grep’ to search for patterns. This is a very useful utility when you get to know it. Use ‘man grep’ to get more information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;sudo:&lt;/strong&gt; Use ‘sudo’ to execute a command as the superuser or as another user. Use ‘man sudo’ to get more information.&lt;/li&gt;
&lt;/ul&gt;
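&lt;p&gt;To tie a few of those commands together, here is a short practice session you could run on the Azure VM from earlier. This is a minimal sketch; the file and directory names are just examples:&lt;/p&gt;

```shell
mkdir practice                  # create a new directory
cd practice                     # make it the working directory
touch notes.md                  # create an empty file
echo "hello linux" > notes.md   # write a line into the file
cat notes.md                    # print the file contents
grep linux notes.md             # search the file for a pattern
cd ..                           # move back to the parent directory
rm -r practice                  # remove the directory and its contents
```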




&lt;p&gt;&lt;strong&gt;Please Note:&lt;/strong&gt; I am not affiliated with any of the organisations or businesses linked above. These are my recommendations, I have not been paid to talk about any of the above.&lt;/p&gt;

&lt;p&gt;Thank you for taking the time to read this article. Feel free to reach out on social media if you want to chat or discuss any of this further.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>cloudnative</category>
      <category>linux</category>
      <category>learning</category>
    </item>
    <item>
      <title>Multiple Azure Subs One Github Action</title>
      <dc:creator>Karl Cooke</dc:creator>
      <pubDate>Thu, 25 Nov 2021 19:14:58 +0000</pubDate>
      <link>https://dev.to/karl_itnerd/multiple-azure-subs-one-github-action-3230</link>
      <guid>https://dev.to/karl_itnerd/multiple-azure-subs-one-github-action-3230</guid>
      <description>&lt;p&gt;One of the benefits of working for an Azure MSP is that no two days are the same. You are constantly presented with different challenges to solve. It’s a lot of fun! I decided to write about how we solved one particular challenge below.&lt;/p&gt;

&lt;h2&gt;The Challenge&lt;/h2&gt;

&lt;p&gt;You’re working on a project that requires you to automatically deploy resources to Azure when a new branch is created in your GitHub repository. In this use case, you’re working with a company that is creating Azure Functions to bridge the gap between Azure and their software platform. Every time they want to test some tweaks or changes, a new branch is created. To support this testing, they need to create an Azure Function and a few other supporting resources. This can be time-consuming when done manually, which is where GitHub Actions workflows or Azure DevOps Pipelines come in. To complicate matters, you are following the best-practice advice that Production, Development, and Test environments should be kept in separate subscriptions.&lt;/p&gt;

&lt;p&gt;In this article, we will focus on how to create a workflow in GitHub Actions that logs into the correct Azure subscription, specifically the Dev or Test subscriptions, based on a branch naming convention. The naming convention states that they must use ‘dev’ or ‘test’ in their branch names so that we can ensure that the resources are created in the correct Azure subscriptions.&lt;/p&gt;

&lt;p&gt;If you’re new to &lt;a href="https://docs.github.com/en/actions"&gt;GitHub Actions&lt;/a&gt; or &lt;a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/?view=azure-devops"&gt;Azure DevOps Pipelines&lt;/a&gt;, click the links to find out more.&lt;/p&gt;

&lt;h2&gt;The Solution&lt;/h2&gt;

&lt;p&gt;Firstly, a big thank you to &lt;a href="https://twitter.com/samsmithnz"&gt;Sam Smith&lt;/a&gt; who answered a query on Twitter about how to reference the branch name correctly in a conditional step in the workflow.&lt;/p&gt;

&lt;p&gt;In order for the workflow to successfully log in to the Azure subscriptions, you will need to make sure that you have created a Service Principal with the appropriate permissions. Once the Service Principal has been created, you can add the details to a repository secret on GitHub. You can find out more about how to create this Service Principal on the ‘&lt;a href="https://github.com/Azure/login"&gt;Azure/Login&lt;/a&gt;’ GitHub repo. Once created, your secrets might look like the below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bZUGrczg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/ghaction1/gh_secrets.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bZUGrczg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/ghaction1/gh_secrets.PNG" alt="GitHub Secrets" width="292" height="143"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see the workflow YAML file below. I’ve included some annotations to help if you’re unfamiliar with workflow files in GitHub Actions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
name: New Branch Trigger

on:
  create: # Triggers on the creation of a new branch (or release).

  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest # Hosted build agent that our new action runs on.

    steps:
      - uses: actions/checkout@v2 # Checks out our GitHub repo.

      - name: Git default branch name
        run: echo $GITHUB_REF

      - name: Git branch name
        id: git-branch-name
        uses: EthanSK/git-branch-name-action@v1 # Action to grab the current branch name.

      - name: Echo the branch name
        run: echo "Branch name is ${GIT_BRANCH_NAME}"

      - name: Login via Az module (DEV)
        uses: azure/login@v1 # The name of the action we are using.
        with:
          creds: ${{secrets.AZURE_CREDENTIALS_DEV}} # Reference to the secret we created earlier.
          enable-AzPSSession: true 
        if: contains(github.ref, 'dev') # Conditional check. 
        # Checks if the Github branch contains 'dev'. This step will only run if this is TRUE.

      - name: Login via Az module (TEST)
        uses: azure/login@v1
        with:
          creds: ${{secrets.AZURE_CREDENTIALS_TEST}}
          enable-AzPSSession: true 
        if: contains(github.ref, 'test') # Conditional check. 
        # Checks if the Github branch contains 'test'. This step will only run if this is TRUE.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In addition to the &lt;strong&gt;‘Azure/Login’&lt;/strong&gt; action that I mentioned above, I am also using an action called &lt;strong&gt;‘git-branch-name-action’&lt;/strong&gt;. It grabs the current branch name and uses it to create an environment variable. I like this action as it just grabs the branch name whereas the default &lt;strong&gt;'$GITHUB_REF'&lt;/strong&gt; environment variable outputs it in the format &lt;strong&gt;‘refs/heads/branchname’&lt;/strong&gt;. This is useful as I want to use the branch name variable in the naming of the Azure resources this workflow goes on to create. You can find out more about this action in the GitHub Marketplace &lt;a href="https://github.com/marketplace/actions/git-branch-name"&gt;here&lt;/a&gt;. My thanks to creator &lt;a href="https://github.com/EthanSK"&gt;Ethan Sarif-Kattan&lt;/a&gt;!&lt;/p&gt;
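&lt;p&gt;If you would rather avoid an extra action, the ‘refs/heads/’ prefix can also be stripped with plain shell parameter expansion inside a run step. This is a minimal sketch; the branch name is a made-up example:&lt;/p&gt;

```shell
# GITHUB_REF as provided by GitHub Actions, e.g. for a
# (hypothetical) branch named dev-widget:
GITHUB_REF="refs/heads/dev-widget"

# Remove the leading 'refs/heads/' to leave just the branch name.
branch="${GITHUB_REF#refs/heads/}"
echo "$branch"   # dev-widget
```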

&lt;p&gt;I’ve included some screenshots of the workflow running and triggering different steps depending on the branch name.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow run - Dev Branch&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the image below, the branch in this run contained ‘dev’ in the name which meant it triggered the &lt;strong&gt;‘Login via Az Module (Dev)’&lt;/strong&gt; step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SLW9ua9H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/ghaction1/Dev_Run.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SLW9ua9H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/ghaction1/Dev_Run.png" alt="Workflow Run - Dev" width="425" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow run - Test Branch&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the image below, the branch in this run contained ‘test’ in the name which meant it triggered the &lt;strong&gt;‘Login via Az Module (Test)’&lt;/strong&gt; step.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EmR9trlA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/ghaction1/Test_Run.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EmR9trlA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/ghaction1/Test_Run.PNG" alt="Workflow Run - Test" width="424" height="521"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you’ve been following along, you now have a GitHub Action workflow that will be triggered when you create a new branch in your repository. It will also deploy your resources to the correct Azure subscription thanks to the conditional Azure login steps. Check back in the coming weeks for articles where I expand on my use of this workflow.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;PSA:&lt;/strong&gt;  &lt;strong&gt;&lt;em&gt;Remember&lt;/em&gt;&lt;/strong&gt; , it’s &lt;strong&gt;NOT&lt;/strong&gt; a good idea to experiment on a production environment. Please make sure you test any recommendations you read online in a Dev or Test environment before going to production.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Thank you for taking the time to read this article. I hope you found it useful. If you did, please feel free to share on your social media and tag me. You can find my social media handles above or to the left, depending on what device you are reading this on.&lt;/p&gt;




</description>
      <category>azure</category>
      <category>microsoft</category>
      <category>devops</category>
      <category>github</category>
    </item>
    <item>
      <title>Control Public IP for Azure Database Migration Service</title>
      <dc:creator>Karl Cooke</dc:creator>
      <pubDate>Thu, 25 Nov 2021 19:09:15 +0000</pubDate>
      <link>https://dev.to/karl_itnerd/control-public-ip-for-azure-database-migration-service-1lg2</link>
      <guid>https://dev.to/karl_itnerd/control-public-ip-for-azure-database-migration-service-1lg2</guid>
      <description>&lt;p&gt;Lately, I’ve been working with some customers to automate the migration of their services from another cloud provider to Azure. Quite often the services that need to be migrated include SQL Databases, for example MS SQL, MySQL, or PostgreSQL. In most cases, when there are databases to be migrated we make use of the Azure Database Migration Service. You can find out more about it &lt;a href="https://docs.microsoft.com/en-gb/azure/dms/"&gt;here&lt;/a&gt;. This link is a great jumping off point for information, guides, and tutorials about Azure Database Migration Service. At the time of writing DMS supports migration of Microsoft SQL, MySQL, PostgreSQL, MongoDB, and Oracle.&lt;/p&gt;

&lt;h2&gt;The Problem&lt;/h2&gt;

&lt;p&gt;Quite often, when migrating services from another cloud provider or on-premises environment, you have to contend with firewalls and allow lists. This is to be expected of any well-managed environment whether that be in the cloud or in your on-premises locations. This presents us with a problem when it comes to using DMS. DMS doesn’t have a static public IP address for outbound communication. With the default setup, we can’t guarantee what IP address the service will use when it tries to make contact with a database that doesn’t reside within the Azure environment or at the other end of a VPN or ExpressRoute connection.&lt;/p&gt;

&lt;p&gt;When you spin up Azure DMS for the first time, you will be asked to pick an existing virtual network and subnet or to create a new one. The Database Migration Service will use this virtual network and subnet to communicate with source and target databases. It achieves this by placing a NIC (Network Interface Card) resource on the chosen subnet.&lt;/p&gt;

&lt;p&gt;As you can see in the image below, the service has been assigned the next available private IP address on the network in the same way as would happen if you created a virtual machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PNpS5OLB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/azdms/dms_overview.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PNpS5OLB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/azdms/dms_overview.PNG" alt="DMS Overview" width="880" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Not the solution&lt;/h2&gt;

&lt;p&gt;As mentioned above, the DMS resource uses a NIC to attach itself to the virtual network. During my initial investigations, I attempted to update the NIC configuration to include a public IP address but, unfortunately, you get a ‘not found’ error similar to the one below if you try to access the IP configuration for this NIC.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VViN7gLk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/azdms/NIC_IP.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VViN7gLk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/azdms/NIC_IP.PNG" alt="NIC IP Config" width="880" height="125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yXJlIcfU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/azdms/NIC_IP_Config_NotFound.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yXJlIcfU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/azdms/NIC_IP_Config_NotFound.PNG" alt="NIC IP Config" width="880" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can try this yourself by locating the NIC that is tagged with the DMS resource ID.&lt;/p&gt;

&lt;h2&gt;The Solution&lt;/h2&gt;

&lt;p&gt;The above failure got me thinking about what other Azure resources might be able to help me take control of the public IP address that DMS uses. That’s when it occurred to me that, if the traffic destined for the public internet traverses the virtual network first, we might be able to use NAT Gateway to control that outbound traffic as it leaves the virtual network. So I decided to do an experiment! You can find out more about NAT Gateway &lt;a href="https://docs.microsoft.com/en-us/azure/virtual-network/nat-gateway/nat-overview"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I already had my DMS instance set up from the above attempt, so all I needed to add was a SQL Database instance (in this case, I opted for Azure Database for MySQL) and a NAT Gateway with Public IP address. Under normal circumstances, I would configure some private connectivity to the database for resources that already exist in Azure but, for the purposes of this experiment, we are &lt;em&gt;pretending&lt;/em&gt; that the database resides in another public cloud provider.&lt;/p&gt;

&lt;p&gt;First off, I attempted to connect to my new MySQL instance with DMS. As you can see from the image below, the connection was rejected as the dynamic public IP address isn’t in the allow list.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ISQrzNRx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/azdms/mysql_connerror1.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ISQrzNRx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/azdms/mysql_connerror1.PNG" alt="MySQL Connection Error" width="410" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To prove my point, I rebooted the DMS resource so that it picked up a new dynamic public IP address. As you can see below, the connection still fails, but with a different IP address in the error.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FlmSxUdK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/azdms/mysql_connerror2.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FlmSxUdK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/azdms/mysql_connerror2.PNG" alt="MySQL Connection Error" width="416" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, I updated the virtual network/subnet configuration to incorporate the NAT Gateway I had created previously. You can see the public IP address that I had assigned it in the image below.&lt;/p&gt;
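&lt;p&gt;The subnet association step above boils down to a single CLI command. Again, this is a sketch with placeholder names (&lt;code&gt;my-vnet&lt;/code&gt;, &lt;code&gt;dms-subnet&lt;/code&gt;, &lt;code&gt;my-natgw&lt;/code&gt;) standing in for whatever your DMS instance actually uses:&lt;/p&gt;

```shell
# Associate the NAT Gateway with the subnet that DMS is deployed into
# (vnet/subnet/gateway names are placeholders)
az network vnet subnet update \
  --resource-group MyResourceGroup \
  --vnet-name my-vnet \
  --name dms-subnet \
  --nat-gateway my-natgw
```

&lt;p&gt;Once the association is in place, all outbound internet traffic from that subnet egresses via the NAT Gateway’s public IP.&lt;/p&gt;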

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2cgcmbww--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/azdms/NATGW_PIP.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2cgcmbww--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/azdms/NATGW_PIP.png" alt="NAT Gateway Public IP" width="831" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All internet bound traffic is now routing through the NAT Gateway and NAT’ing behind the public IP address you can see above. The only thing left to do is to run a final test connection to our &lt;em&gt;pretend&lt;/em&gt; source database server. I have deliberately kept the new IP off the ‘Allow List’ so that we can see the IP reported in the error. As you can see in the connection error below, we are now connecting via the new NAT Gateway public IP.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--X4l1XrJA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/azdms/mysql_connerror3.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--X4l1XrJA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/azdms/mysql_connerror3.PNG" alt="MySQL Connection Error" width="405" height="223"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;In Summary&lt;/h2&gt;

&lt;p&gt;We’ve been able to control the outbound public IP address that Azure Database Migration Service uses to connect to databases external to Azure for migration purposes. All we needed to do was add a NAT Gateway to control the outbound traffic. If you follow the path I did, you might end up with a solution that looks similar to the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QZrnQ7Md--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/azdms/NATGW_DIag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QZrnQ7Md--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://irishtechie.cloud/images/blogs/azdms/NATGW_DIag.png" alt="NAT Gateway Diagram" width="608" height="663"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;PSA:&lt;/strong&gt; &lt;strong&gt;&lt;em&gt;Remember&lt;/em&gt;&lt;/strong&gt;, it’s &lt;strong&gt;NOT&lt;/strong&gt; a good idea to experiment in a production environment. Please make sure you test any recommendations you read online in a Dev or Test environment before going to production.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Thank you for taking the time to read this article. I hope you found it useful. If you did, please feel free to share on your social media and tag me. You can find my social media handles above or to the left, depending on what device you are reading this on.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>database</category>
      <category>networking</category>
      <category>migration</category>
    </item>
  </channel>
</rss>
