<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Andrew</title>
    <description>The latest articles on DEV Community by Andrew (@ghostinthewire5).</description>
    <link>https://dev.to/ghostinthewire5</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F261991%2F8ee7a9af-3f32-4e85-8db4-4101f6f85fe1.jpg</url>
      <title>DEV Community: Andrew</title>
      <link>https://dev.to/ghostinthewire5</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ghostinthewire5"/>
    <language>en</language>
    <item>
      <title>🚀 Copilot Velocity Hub - Transform Your Dev Workflow with AI</title>
      <dc:creator>Andrew</dc:creator>
      <pubDate>Fri, 13 Mar 2026 14:41:57 +0000</pubDate>
      <link>https://dev.to/ghostinthewire5/copilot-velocity-hub-transform-your-dev-workflow-with-ai-4i8p</link>
      <guid>https://dev.to/ghostinthewire5/copilot-velocity-hub-transform-your-dev-workflow-with-ai-4i8p</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yf236t2e4hg0irh83tg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yf236t2e4hg0irh83tg.png" alt="Copilot Velocity Hub Header" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🎬 Live Project
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Repository&lt;/strong&gt;: &lt;a href="https://github.com/ghostinthewires/copilot-velocity-hub" rel="noopener noreferrer"&gt;GitHub - copilot-velocity-hub&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;After seeing several posts about Copilot Plugin Marketplaces following &lt;a href="https://docs.github.com/en/copilot/how-tos/copilot-cli/customize-copilot/plugins-marketplace" rel="noopener noreferrer"&gt;this guide&lt;/a&gt; from GitHub, I decided to see what all the fuss was about. I mostly Vibe Coded a &lt;strong&gt;production-ready Next.js application&lt;/strong&gt; that transforms GitHub Copilot CLI into an intuitive, beautifully designed productivity hub. Instead of wrestling with terminal commands, developers can now access powerful AI-assisted tools through a modern web interface with real-time output streaming.&lt;/p&gt;

&lt;h3&gt;
  
  
  🎯 The Problem I Solved
&lt;/h3&gt;

&lt;p&gt;GitHub Copilot CLI is incredibly powerful, but it requires terminal fluency. Copilot Velocity Hub makes these capabilities accessible to developers of all skill levels with an elegant, responsive web application that executes complex tasks with a single click.&lt;/p&gt;

&lt;h3&gt;
  
  
  ✨ Key Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;🎨 Beautiful Modern UI&lt;/strong&gt; - Production-grade SaaS design with smooth animations and terminal-style output display&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;⚡ Fast &amp;amp; Responsive&lt;/strong&gt; - Built with Next.js 16, React 19, and TailwindCSS 4&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;🔌 4 Powerful Built-in Plugins&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;📄 &lt;strong&gt;Generate README&lt;/strong&gt; - Create comprehensive, well-structured project documentation instantly&lt;/li&gt;
&lt;li&gt;🔗 &lt;strong&gt;Summarize Git Commits&lt;/strong&gt; - Analyze git history and generate professional changelogs&lt;/li&gt;
&lt;li&gt;🧪 &lt;strong&gt;Generate Test Cases&lt;/strong&gt; - Scaffold unit and integration tests following best practices&lt;/li&gt;
&lt;li&gt;🔍 &lt;strong&gt;Repository Review&lt;/strong&gt; - Comprehensive code quality, security, and architecture audits&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;🔒 Security First&lt;/strong&gt; - No command injection vulnerabilities, strict whitelist validation, sandboxed execution&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;📦 Extensible&lt;/strong&gt; - Add new plugins in 2 minutes with minimal code changes&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;🧪 Production Ready&lt;/strong&gt; - Full TypeScript, comprehensive error handling, Jest test suite, security hardened&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  📸 The Experience
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Homepage - Clean &amp;amp; Intuitive
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcuy8ne3fk47pp36gntnp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcuy8ne3fk47pp36gntnp.png" alt="Copilot Plugin Dashboard" width="800" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Beautiful Terminal-Style Output
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2urisydig4do0hxzqxh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2urisydig4do0hxzqxh.png" alt="Terminal Output" width="593" height="504"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  🚀 Getting Started (2 Minutes)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1. Clone the repository&lt;/span&gt;
git clone https://github.com/ghostinthewires/copilot-velocity-hub.git
&lt;span class="nb"&gt;cd &lt;/span&gt;copilot-velocity-hub

&lt;span class="c"&gt;# 2. Install dependencies&lt;/span&gt;
npm &lt;span class="nb"&gt;install&lt;/span&gt;

&lt;span class="c"&gt;# 3. Ensure Copilot CLI is authenticated&lt;/span&gt;
copilot auth login

&lt;span class="c"&gt;# 4. Start development server&lt;/span&gt;
npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then open &lt;strong&gt;&lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;&lt;/strong&gt; and start boosting your productivity!&lt;/p&gt;

&lt;h3&gt;
  
  
  📋 Requirements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Node.js&lt;/strong&gt; 18.0.0+&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;npm&lt;/strong&gt; 9.0.0+&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Git&lt;/strong&gt; (latest)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Copilot CLI&lt;/strong&gt; (installed globally: &lt;code&gt;npm install -g @github/copilot&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Account&lt;/strong&gt; (for Copilot authentication)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Quick verify:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;node &lt;span class="nt"&gt;--version&lt;/span&gt;          &lt;span class="c"&gt;# v18.0.0+&lt;/span&gt;
npm &lt;span class="nt"&gt;--version&lt;/span&gt;           &lt;span class="c"&gt;# 9.0.0+&lt;/span&gt;
copilot &lt;span class="nt"&gt;--version&lt;/span&gt;       &lt;span class="c"&gt;# 1.x+&lt;/span&gt;
copilot auth login      &lt;span class="c"&gt;# Authenticate once&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  My Experience with GitHub Copilot CLI
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🎯 Why I Chose Copilot CLI
&lt;/h3&gt;

&lt;p&gt;I wanted to showcase &lt;strong&gt;how powerful Copilot CLI becomes when integrated into a real-world application&lt;/strong&gt;. Rather than building a simple demo, I created a production-grade platform that brings AI-assisted development to the entire team, regardless of terminal proficiency.&lt;/p&gt;

&lt;p&gt;The key insight: Copilot CLI is powerful, but buried in the terminal. What if we gave it a beautiful, user-friendly interface that the entire team could use?&lt;/p&gt;

&lt;h3&gt;
  
  
  💡 How I Used Copilot CLI
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. &lt;strong&gt;Direct Process Execution&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The application executes Copilot CLI commands securely:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;exec&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;child_process&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;executePlugin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;promptText&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;reject&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;exec&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`copilot prompt "&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;promptText&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
      &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;maxBuffer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;30000&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;stdout&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;stderr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;reject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;stdout&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
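&lt;p&gt;One caveat worth flagging: interpolating the prompt into a shell string via &lt;code&gt;exec&lt;/code&gt; is only safe here because the prompts are hardcoded. A defence-in-depth sketch (my suggestion, not the repository's actual code) uses &lt;code&gt;execFile&lt;/code&gt;, which passes the prompt as a single argv entry and bypasses the shell entirely:&lt;/p&gt;

```typescript
import { execFile } from 'child_process';

// Hypothetical injection-safe runner (a sketch, not the repository's code):
// arguments travel as an argv array, so no shell ever parses the prompt text.
export function runCli(cmd: string, args: string[]) {
  return new Promise(function (resolve, reject) {
    execFile(cmd, args,
      { maxBuffer: 10 * 1024 * 1024, timeout: 30000 },
      function (error, stdout) {
        if (error) return reject(error);
        resolve(stdout);
      }
    );
  });
}

// Drop-in for executePlugin, assuming the same `copilot prompt` invocation:
export function executePluginSafe(promptText: string) {
  return runCli('copilot', ['prompt', promptText]);
}
```

&lt;p&gt;If user-supplied text ever reaches a prompt later, this variant keeps shell metacharacters inert.&lt;/p&gt;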



&lt;h4&gt;
  
  
  2. &lt;strong&gt;Specialised Prompts for Each Plugin&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Each plugin sends carefully crafted prompts:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generate README Plugin:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`Analyze this project and create a comprehensive README.md file that includes:
- Project description and purpose
- Features overview
- Installation instructions
- Usage examples
- Contributing guidelines
- License information

Format the output as valid Markdown.`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Summarize Commits Plugin:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`Review the recent git commit history and generate a structured CHANGELOG that:
- Groups changes by category (Added, Changed, Fixed, Removed)
- Highlights breaking changes
- Includes commit references
- Follows semantic versioning conventions

Make it suitable for release notes.`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Generate Tests Plugin:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`Analyze the codebase and generate comprehensive test cases that:
- Cover happy path scenarios
- Include edge case handling
- Test error conditions
- Follow Jest best practices
- Are well-documented with clear assertions`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Repository Review Plugin:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`Perform a detailed code review of this repository covering:
- Code quality and architecture
- Security vulnerabilities
- Performance opportunities
- Testing coverage gaps
- Best practices compliance
- Refactoring suggestions

Prioritize by impact and severity.`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  3. &lt;strong&gt;Real-Time Output Streaming&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Users see results appear instantly in a beautiful terminal-style interface as Copilot processes each prompt.&lt;/p&gt;
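&lt;p&gt;A minimal sketch of how that streaming can work (an assumed design using &lt;code&gt;spawn&lt;/code&gt;, not the repository's exact code):&lt;/p&gt;

```typescript
import { spawn } from 'child_process';

// Hypothetical streaming layer: spawn emits stdout chunk by chunk while the
// CLI runs, so each chunk can be forwarded to the browser as it arrives
// (for example over Server-Sent Events) instead of waiting for exit.
export function streamCli(cmd: string, args: string[], onChunk: (text: string) => void) {
  return new Promise(function (resolve, reject) {
    const child = spawn(cmd, args);
    child.stdout.setEncoding('utf8');
    child.stdout.on('data', onChunk);   // fires once per chunk, not per line
    child.on('error', reject);          // e.g. binary not found
    child.on('close', function (code) { resolve(code); });
  });
}
```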

&lt;h3&gt;
  
  
  🎨 Why This Approach Works
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;For Individual Developers:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Quick access to AI-powered code analysis without CLI commands&lt;/li&gt;
&lt;li&gt;✅ Beautiful output display makes results easy to read&lt;/li&gt;
&lt;li&gt;✅ Copy-to-clipboard makes integration seamless&lt;/li&gt;
&lt;li&gt;✅ All available at a single web URL&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;For Teams:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Standardised AI-assisted workflows across the team&lt;/li&gt;
&lt;li&gt;✅ No terminal knowledge required&lt;/li&gt;
&lt;li&gt;✅ Audit trail of AI suggestions&lt;/li&gt;
&lt;li&gt;✅ Easy to add team-specific plugins&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;For Organisations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Can be deployed as internal tool&lt;/li&gt;
&lt;li&gt;✅ Consistent code quality practices&lt;/li&gt;
&lt;li&gt;✅ Reduced security review time&lt;/li&gt;
&lt;li&gt;✅ Accelerated documentation generation&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🚀 Impact on Development Experience
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Dramatically Improved&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ &lt;strong&gt;Accessibility&lt;/strong&gt;: Non-terminal users now leverage Copilot CLI effectively&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Efficiency &amp;amp; Speed&lt;/strong&gt;: 2-minute setup; plugins execute 3-5 complex tasks quickly&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Quality&lt;/strong&gt;: AI-assisted documentation, testing, and reviews improve consistency&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Scalability&lt;/strong&gt;: Easy to add new plugins without core code modifications&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Safety&lt;/strong&gt;: Robust error handling and security validation prevent issues&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What Copilot CLI Enabled&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Professional Documentation&lt;/strong&gt; - Generate README files that meet industry standards&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Intelligent Changelogs&lt;/strong&gt; - Structured release notes from commit analysis&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Comprehensive Test Suites&lt;/strong&gt; - Coverage suggestions and test scaffolding&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smart Code Reviews&lt;/strong&gt; - Architecture, security, and quality recommendations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Team Alignment&lt;/strong&gt; - Consistent coding standards through AI analysis&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  🎓 Key Learning Journey
&lt;/h3&gt;

&lt;p&gt;Building Copilot Velocity Hub taught me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Process Management&lt;/strong&gt;: Safely integrating CLI tools into web applications&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: Preventing command injection while enabling CLI integration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User Experience&lt;/strong&gt;: Making complex AI outputs accessible and beautiful&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plugin Architecture&lt;/strong&gt;: Designing extensible systems for easy feature addition&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modern Web Stack&lt;/strong&gt;: Next.js 16, React 19, TypeScript, and testing best practices&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error Handling&lt;/strong&gt;: Graceful degradation and user-friendly error messages&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ⚡ Performance &amp;amp; Reliability
&lt;/h3&gt;

&lt;p&gt;The application is optimised for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;30-second timeout&lt;/strong&gt; on plugin execution (prevents hanging processes)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;10MB output buffer&lt;/strong&gt; (handles large analysis outputs safely)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time streaming&lt;/strong&gt; (users see progress immediately)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error recovery&lt;/strong&gt; (graceful degradation if Copilot CLI fails)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fast builds&lt;/strong&gt; (&amp;lt;3 seconds with Next.js)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Responsive UI&lt;/strong&gt; (page loads in &amp;lt;500ms)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🔐 Security Considerations
&lt;/h3&gt;

&lt;p&gt;I implemented multiple layers of protection:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Whitelist Validation&lt;/strong&gt; - Only registered plugins can execute&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No User Input&lt;/strong&gt; - Prompts are hardcoded per plugin (no injection risk)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Process Isolation&lt;/strong&gt; - Child processes run with strict resource limits&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Buffer Limits&lt;/strong&gt; - Prevents memory exhaustion from large outputs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error Containment&lt;/strong&gt; - Safe error messages without sensitive data leakage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Environment Isolation&lt;/strong&gt; - Separate process context for each execution&lt;/li&gt;
&lt;/ol&gt;
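&lt;p&gt;The whitelist idea can be sketched in a few lines (illustrative ids and prompts, not the repository's actual registry):&lt;/p&gt;

```typescript
// Hypothetical registry: execution starts from an exact-match lookup, so a
// request can only ever select one of the hardcoded prompts, never supply one.
const PLUGIN_PROMPTS: { [id: string]: string } = {
  'generate-readme': 'Analyze this project and create a comprehensive README.md file.',
  'summarize-commits': 'Review the recent git commit history and generate a CHANGELOG.',
};

export function resolvePrompt(id: string) {
  // hasOwnProperty guard rejects prototype keys like 'constructor' as well
  if (!Object.prototype.hasOwnProperty.call(PLUGIN_PROMPTS, id)) {
    throw new Error('Unknown plugin: ' + id);
  }
  return PLUGIN_PROMPTS[id];
}
```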

&lt;h3&gt;
  
  
  💬 Why This Matters
&lt;/h3&gt;

&lt;p&gt;GitHub Copilot CLI is revolutionary for developer productivity, but it lives in the terminal. By creating Copilot Velocity Hub, I've demonstrated:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copilot CLI can power sophisticated, production-ready web applications&lt;/li&gt;
&lt;li&gt;AI tools don't need to be complex - they can be beautifully simple&lt;/li&gt;
&lt;li&gt;Teams can standardise on AI-assisted workflows without tribal knowledge&lt;/li&gt;
&lt;li&gt;Security and usability are complementary, not opposing forces&lt;/li&gt;
&lt;li&gt;Modern web frameworks make integration straightforward and safe&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  📊 Project Highlights
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Build Time&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&amp;lt;3 seconds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Dev Startup&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~5 seconds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Page Load&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&amp;lt;500ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;TypeScript Coverage&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;100% of source code&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Security Vulnerabilities&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;0 (audited)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Pre-built Plugins&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;4 powerful, extensible&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Test Coverage&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Comprehensive Jest suite&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Documentation&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Detailed guides&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Code Quality&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Production-ready&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🚀 Getting More from Copilot CLI
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Custom Plugins
&lt;/h3&gt;

&lt;p&gt;Want to extend Copilot Velocity Hub? Adding new plugins is straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;customPlugin&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;api-docs-generator&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Generate API Docs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Auto-generate OpenAPI/Swagger documentation&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;icon&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;📚&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;copilotPrompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Generate comprehensive API documentation with examples...&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Integration Ideas
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD Pipeline&lt;/strong&gt; - Trigger analysis on every commit&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Review Bot&lt;/strong&gt; - Use plugin outputs in PR comments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slack Bot&lt;/strong&gt; - Execute plugins and post results to channels&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VS Code Extension&lt;/strong&gt; - Embed directly in the editor&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Desktop App&lt;/strong&gt; - Package with Electron or Tauri&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🙏 Credit &amp;amp; Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/github/copilot-cli" rel="noopener noreferrer"&gt;GitHub Copilot CLI&lt;/a&gt;&lt;/strong&gt; - The powerful backbone&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://nextjs.org" rel="noopener noreferrer"&gt;Next.js&lt;/a&gt;&lt;/strong&gt; - Modern web framework&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://react.dev" rel="noopener noreferrer"&gt;React&lt;/a&gt;&lt;/strong&gt; - Latest React features&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://tailwindcss.com" rel="noopener noreferrer"&gt;TailwindCSS&lt;/a&gt;&lt;/strong&gt; - Beautiful styling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://www.typescriptlang.org" rel="noopener noreferrer"&gt;TypeScript&lt;/a&gt;&lt;/strong&gt; - Type safety&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/longphanquangminh/copilot-plugin-marketplace" rel="noopener noreferrer"&gt;copilot-plugin-marketplace&lt;/a&gt;&lt;/strong&gt; - The project this one is based on&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://jestjs.io" rel="noopener noreferrer"&gt;Jest&lt;/a&gt;&lt;/strong&gt; - Testing framework&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🎉 Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Copilot Velocity Hub&lt;/strong&gt; proves that GitHub Copilot CLI's power extends far beyond the terminal. By combining security, usability, and AI capabilities, we can build tools that genuinely accelerate development workflows.&lt;/p&gt;

&lt;p&gt;Whether you're:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A developer wanting to learn Next.js&lt;/li&gt;
&lt;li&gt;A team looking to standardise AI-assisted workflows&lt;/li&gt;
&lt;li&gt;An organisation seeking to improve code quality consistently&lt;/li&gt;
&lt;li&gt;A learner exploring AI integration patterns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;...this project provides a solid, production-ready foundation.&lt;/p&gt;

&lt;p&gt;The code is clean, well-documented, secure, and ready to extend. Contributions are welcome!&lt;/p&gt;




&lt;h2&gt;
  
  
  🚀 Start Your Journey
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Ready to boost your development velocity?&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/ghostinthewires/copilot-velocity-hub.git
&lt;span class="nb"&gt;cd &lt;/span&gt;copilot-velocity-hub
npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then visit &lt;strong&gt;&lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;&lt;/strong&gt; and experience the future of AI-assisted development.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Have questions or ideas? Open an issue or submit a pull request on &lt;a href="https://github.com/ghostinthewires/copilot-velocity-hub.git" rel="noopener noreferrer"&gt;GitHub!&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Connect on &lt;a href="https://www.linkedin.com/in/andrew-u-404719240/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>githubcopilot</category>
      <category>ai</category>
      <category>vibecoding</category>
    </item>
    <item>
      <title>More than a decade of Azure: What I’d Do Again and What I Regret</title>
      <dc:creator>Andrew</dc:creator>
      <pubDate>Mon, 02 Mar 2026 15:15:29 +0000</pubDate>
      <link>https://dev.to/ghostinthewire5/more-than-a-decade-of-azure-what-id-do-again-and-what-i-regret-1d99</link>
      <guid>https://dev.to/ghostinthewire5/more-than-a-decade-of-azure-what-id-do-again-and-what-i-regret-1d99</guid>
      <description>&lt;p&gt;Reflecting on two decades of leading technology transformations. Global FinTech SaaS platforms processing millions of transactions. AI-driven automation that reclaimed hundreds of person-days. High-pressure private equity exits. Kubernetes clusters that bridged the gap from legacy to cloud-native. Terraform state files that moved the needle for global consultancies. &lt;/p&gt;

&lt;p&gt;I’ve seen the "cloud-native" definition shift multiple times. This is the post I wish someone had written for me when I started my journey through the ranks. moving through roles as a DevOps Engineer, an Engineering Lead and now Senior DevOps Manager, I’ve navigated the Azure ecosystem's growth from a "Windows-first" cloud to a powerhouse of Linux, Kubernetes, and AI.&lt;/p&gt;

&lt;p&gt;Every decision below is something I’ve either shipped to production and would do again, or something I’ve spent months deconstructing to pay down technical debt.&lt;/p&gt;

&lt;p&gt;If I were starting a greenfield project today, or stepping into a new role, here is how I would weigh the decisions that actually move the needle on business outcomes.&lt;/p&gt;

&lt;p&gt;No theoretical fence-sitting. No playing it safe with vendor-neutrality. This is the raw reality of scaling Azure under pressure.&lt;/p&gt;

&lt;h1&gt;
  
  
  Azure
&lt;/h1&gt;

&lt;p&gt;🟩 Endorse&lt;/p&gt;

&lt;h3&gt;
  
  
  Picking Azure over AWS
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The Azure Advantages (Why I’d do it again):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Container Gold Mine:&lt;/strong&gt; AWS App Runner is a non-starter. If you want true serverless containers that actually scale to zero, Azure Container Apps (ACA) is the industry leader. Similarly, AKS feels like a cohesive product, whereas EKS often feels like a collection of parts you have to bolt together yourself.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Identity that Works:&lt;/strong&gt; Microsoft Entra ID (formerly Azure AD) is the undisputed heavyweight champion. Compared to the headache of AWS Cognito, Entra ID is a robust, enterprise-grade solution that I’ve relied on to secure highly regulated, global platforms.  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Developer Ecosystem:&lt;/strong&gt; Between GitHub Actions and Azure DevOps, the CI/CD story is just tighter. When you’re pushing for on-time delivery, having your repos, boards, and pipelines in one ecosystem is a massive force multiplier.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The AWS Envy (The "Regrets" and Hurdles):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Networking Flexibility:&lt;/strong&gt; I’ll be honest, Route 53 is easier to wield than Azure DNS. AWS handles global routing and health checks with a level of simplicity that Azure's Front Door and Traffic Manager can sometimes overcomplicate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tooling &amp;amp; Local Dev:&lt;/strong&gt; Azure still lacks a true "LocalStack" equivalent. Being able to emulate the entire cloud locally is a huge win for AWS developers. In the Azure world, we rely more on emulators like Azurite, but it’s not quite the same "cloud-in-a-box" experience.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deployment Velocity:&lt;/strong&gt; There’s no sugar-coating it: Azure Resource Manager (ARM) and even some Bicep deployments can be painfully slow. Watching a gateway or a firewall provision for 20 minutes is a "coffee break" I’d rather not have to take.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Talent War:&lt;/strong&gt; It’s still a reality that finding AWS-specialised engineers is easier on the open market. Building the high-performing Azure teams I have led in the past required more internal "up-skilling" and intentional training programs because the talent pool is shallower.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security Group Fatigue:&lt;/strong&gt; While Azure’s NSGs and ASGs are logical, tracing a routing issue through multiple VNet peers and UDRs (User Defined Routes) can become "complicated AF" compared to the more fluid networking model AWS sometimes offers.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  The Bottom Line:
&lt;/h4&gt;

&lt;p&gt;I’ve shipped on both. AWS might have the "easy" button for networking and local dev, but for enterprise-scale automation, serverless maturity, and AI-driven efficiency, I’m putting my money on the Azure stack every single time.&lt;/p&gt;

&lt;h1&gt;
  
  
  Azure Container Apps (ACA) vs AKS
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;🟩 What I Endorse:&lt;/strong&gt; Use Azure Container Apps (ACA) for the majority of microservices. In previous years, we went straight to AKS (Azure Kubernetes Service). While I’ve successfully utilised Kubernetes to optimise cloud operations for global organisations, the cognitive load on the team is significant. ACA provides the "Serverless Container" experience that lets teams scale to zero and focus on the code, not the control plane. Having managed 24/7 operational environments, I’ve learned that the best infrastructure is the one you don't have to babysit, and ACA delivers exactly that without the "Kubernetes Tax."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🟧 What I regret:&lt;/strong&gt; Using AKS (Azure Kubernetes Service) for small, product-focused teams. I regret the times I built a full K8s orchestration layer for a simple API. Unless you have the resilient team structures in place to manage the complexity of ingress controllers, service meshes, and node pools, it’s an unnecessary tax on productivity.&lt;/p&gt;
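&lt;p&gt;A minimal sketch of the scale-to-zero setting on a Container App, as a Bicep fragment. The resource names, image, and API version here are placeholders, not a production template:&lt;/p&gt;

```bicep
// Illustrative only: names, image, and apiVersion are placeholders.
resource api 'Microsoft.App/containerApps@2024-03-01' = {
  name: 'claims-api'
  location: resourceGroup().location
  properties: {
    managedEnvironmentId: acaEnvironment.id // assumes an existing ACA environment resource
    template: {
      containers: [
        {
          name: 'claims-api'
          image: 'myregistry.azurecr.io/claims-api:latest'
        }
      ]
      scale: {
        minReplicas: 0 // scale to zero when idle
        maxReplicas: 10
      }
    }
  }
}
```

&lt;p&gt;With minReplicas set to 0, an idle service costs nothing; the platform handles the wake-up on the next request or event.&lt;/p&gt;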

&lt;h1&gt;
  
  
  Azure SQL vs Cosmos DB
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;🟩 What I Endorse:&lt;/strong&gt; Default to Azure SQL (Hyperscale or Managed Instance) for 80% of enterprise workloads. Relational data is the backbone of sectors like Insurance and Finance. I’ve found that the reliability and familiar tooling of SQL Server, scaled via Azure’s elastic tiers, solve most business problems without the complexity of distributed NoSQL. Azure SQL is battle-tested. It scales, it’s familiar, and it’s consistently reliable for the heavy lifting I've overseen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🟧 What I regret:&lt;/strong&gt; Using Cosmos DB as a "catch-all" database.&lt;br&gt;
Cosmos DB is incredible for global scale and low-latency NoSQL needs. However, I regret using it in scenarios where the data schema was actually quite rigid. If you don't need global distribution or a flexible schema, Cosmos DB is an expensive way to realize you should have just used a JOIN. Cosmos DB is an engineering marvel, but the cost of mismanagement (improper partitioning) is high. I now reserve Cosmos for specific, high-scale event stores rather than general-purpose storage.&lt;/p&gt;

&lt;h1&gt;
  
  
  Terraform vs Bicep
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;🟩 What I Endorse:&lt;/strong&gt; Stick with Terraform for hybrid and multi-cloud environments. My experience reflects a heavy reliance on Terraform and ARM templates because consistency is king. Its ability to manage state and provide a platform-agnostic language is vital when coordinating hybrid cloud infrastructure. In several previous roles I championed the migration of applications to Azure; Terraform provided the common language that allowed us to bridge the gap between legacy on-prem and cloud enablement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🟧 What I regret:&lt;/strong&gt; Waiting too long to adopt Azure Bicep for Azure-only projects. If you are 100% in the Azure ecosystem, the "Day 0" support for new features in Bicep is superior to waiting for provider updates in Terraform. When I first worked in the cloud it was on AWS, and the regret there was not adopting CDK sooner; in Azure, the regret is sticking with verbose, nested ARM Templates for too long. For Azure-native shops, Bicep offers a much cleaner abstraction, and I regret not moving our Azure-specific modules to it sooner to lower the team's cognitive load. For the strategic, global portfolios I manage now, though, Terraform remains the gold standard for state management and modularity.&lt;/p&gt;
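&lt;p&gt;To illustrate the difference in cognitive load, here is a hypothetical storage account in Bicep; the equivalent nested ARM JSON would run several times longer. The resource name and API version are illustrative:&lt;/p&gt;

```bicep
// Hypothetical example; resource name and apiVersion are illustrative.
param location string = resourceGroup().location

resource sa 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 'stappdata${uniqueString(resourceGroup().id)}'
  location: location
  sku: { name: 'Standard_LRS' }
  kind: 'StorageV2'
}
```

&lt;p&gt;Ten lines, fully typed and validated in the editor; the same resource in nested ARM JSON needs schema, contentVersion, parameter plumbing, and a far noisier resource body.&lt;/p&gt;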

&lt;h1&gt;
  
  
  Azure Functions vs Logic Apps
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;🟩 What I Endorse:&lt;/strong&gt; A "Functions-first" approach for event-driven logic. Utilising C# and Python Functions has allowed my teams to focus on product development rather than infrastructure patching. It’s the ultimate tool for reducing technical debt while maintaining a high velocity. Function triggers have reactivity baked into their DNA: you aren't writing "glue code" just to turn an event into an HTTP call, and the integration with Service Bus and Blob Storage is seamless, saving weeks of development time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🟧 What I regret:&lt;/strong&gt; Building complex business logic directly into Logic Apps. While Logic Apps are great for "no-code" glue between SaaS products, they are a nightmare for version control, unit testing, and complex CI/CD pipelines. I regret not moving that logic into Azure Functions earlier, where we could apply proper software engineering rigor.&lt;/p&gt;
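&lt;p&gt;The testability argument is easiest to see with the logic written as a plain function. This is a hypothetical claim-routing example: in a real Azure Function it would be invoked from a Service Bus trigger, but because it’s pure code it can be unit-tested with no cloud dependency, which is exactly the engineering rigor Logic Apps make hard:&lt;/p&gt;

```python
import json

# Hypothetical business logic: names and thresholds are illustrative.
# In an Azure Function this would be called from a Service Bus queue trigger;
# as a pure function it is trivially unit-testable and version-controlled.
def handle_claim(message_body: str) -> dict:
    """Validate an incoming claim message and return a routing decision."""
    claim = json.loads(message_body)
    amount = claim.get("amount", 0)
    if amount > 10_000:
        return {"status": "escalated", "queue": "manual-review"}
    if amount > 0:
        return {"status": "approved", "queue": "auto-settlement"}
    return {"status": "rejected", "reason": "non-positive amount"}
```

&lt;p&gt;The trigger binding stays a thin shell around this function, so the CI pipeline can assert on the routing decisions without ever touching Azure.&lt;/p&gt;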

&lt;h1&gt;
  
  
  Application Insights &amp;amp; Log Analytics vs Tool Sprawl
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;🟩 What I Endorse:&lt;/strong&gt; Investing heavily in Application Insights from day zero. You cannot manage a 24/7 operational environment without a unified view. Also, shifting the perception of IT within an organisation requires data. By leveraging Azure Monitor and App Insights, I’ve been able to show stakeholders real-time dashboards that link system health to business value. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🟧 What I regret:&lt;/strong&gt; Relying on fragmented third-party monitoring tools.&lt;br&gt;
In high-pressure 24/7 environments, "tool sprawl" is the enemy. I regret the times we had separate logs for the app, the network, and the database that didn't talk to each other. Consolidating into a unified Log Analytics workspace is nearly always the better move.&lt;/p&gt;
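&lt;p&gt;As a sketch of what that unified workspace buys you, here is a KQL query against the standard Application Insights &lt;code&gt;requests&lt;/code&gt; table that surfaces per-minute failure rate, the kind of number you can put straight on a stakeholder dashboard:&lt;/p&gt;

```kusto
// Per-minute request failure rate over the last 15 minutes.
// Uses the standard Application Insights 'requests' schema.
requests
| where timestamp > ago(15m)
| summarize total = count(), failed = countif(success == false) by bin(timestamp, 1m)
| extend failureRatePct = round(100.0 * failed / total, 2)
| order by timestamp desc
```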

&lt;h1&gt;
  
  
  Azure Kubernetes Service Node Auto-Provisioning (NAP)
&lt;/h1&gt;

&lt;p&gt;🟩 Endorse&lt;/p&gt;

&lt;p&gt;If you’re running AKS without Azure Kubernetes Service Node Auto-Provisioning (NAP), which is built on the open-source project Karpenter, you’re essentially burning money.&lt;/p&gt;

&lt;p&gt;In the old world, the Cluster Autoscaler was the bottleneck: slow, rigid, and constantly fighting with manual Node Pool configurations. In the modern Azure ecosystem, Node Auto-Provisioning (based on the Karpenter model) is the game-changer. It’s fast, intelligent, and provisions the exact VM sizes your pods actually need in real-time.&lt;/p&gt;

&lt;p&gt;Drawing from my experience optimising cloud operations for global SaaS platforms, we’ve seen 30-40% cost reductions on compute after moving away from static node pools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Spot Instance Handling:&lt;/strong&gt; It actually works without the "eviction anxiety" of the past.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;True Consolidation:&lt;/strong&gt; It proactively scales down and moves workloads to ensure you aren't paying for "ghost" capacity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real Bin-Packing:&lt;/strong&gt; No more half-empty D-Series VMs sitting idle; the scheduler finally treats your compute like a fluid resource.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The learning curve:&lt;/strong&gt; Getting your head around NodePool objects, provisioning configuration, and taints/tolerations is real. But for any strategic portfolio in 2026, this isn't an "add-on"; it’s a non-negotiable requirement for a resilient, cost-efficient infrastructure.&lt;/p&gt;
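&lt;p&gt;For orientation, here is roughly what a Karpenter-style NodePool looks like. Treat it strictly as a sketch: the apiVersion, label keys, and consolidation settings vary between NAP releases, so verify against the current docs before applying anything:&lt;/p&gt;

```yaml
# Illustrative Karpenter-style NodePool for AKS NAP.
# Field names and apiVersion differ between releases; verify before use.
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: general-purpose
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]   # prefer Spot where workloads tolerate eviction
        - key: karpenter.azure.com/sku-family
          operator: In
          values: ["D", "E"]              # let NAP pick the exact VM size
  disruption:
    consolidationPolicy: WhenUnderutilized # proactively bin-pack and scale down
```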

&lt;h1&gt;
  
  
  KEDA (Kubernetes Event-Driven Autoscaling)
&lt;/h1&gt;

&lt;p&gt;🟩 Endorse for Azure event-driven workloads&lt;/p&gt;

&lt;p&gt;In my time directing technology transformations, I’ve seen too many teams rely on standard HPA (Horizontal Pod Autoscaler) scaling on CPU/Memory. That’s a blunt instrument. KEDA is the precision tool. While HPA watches your hardware, KEDA watches your business logic: Azure Service Bus queue depth, Event Hub lag, or Storage Queue message counts.&lt;/p&gt;

&lt;p&gt;If you’re running workers that process insurance claims or property management updates, KEDA is the answer. We’ve used it to cut costs significantly on batch processing workloads that used to sit idle 24/7 "just in case." Now, they scale to zero when the queue is empty and ramp up instantly based on the actual backlog.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where it shines in the Azure Stack:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Azure Service Bus / Storage Queues:&lt;/strong&gt; Scaling based on &lt;code&gt;activeMessageCount&lt;/code&gt;. This is the "gold standard" for async processing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Azure Event Hubs:&lt;/strong&gt; Scaling based on partition lag to ensure you aren't falling behind on high-throughput data streams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scheduled Jobs:&lt;/strong&gt; Using cron-based scaling for predictable peak periods (e.g., end-of-month reporting), which often outperforms standard Kubernetes CronJobs for long-running processes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Azure Monitor/Log Analytics:&lt;/strong&gt; Scaling on custom KQL (Kusto) metrics that your application exposes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where it doesn’t:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Request-based scaling:&lt;/strong&gt; If you're scaling a public-facing API based on traffic, stick with HPA + Azure Application Gateway/Ingress metrics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Latency-sensitive "Instant-on" workloads:&lt;/strong&gt; If your application cannot handle a cold start from zero, keep a minimum replica count.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Strategy:&lt;/strong&gt; Use KEDA (ScaledObjects) for your async workers and background processors, and HPA for your synchronous REST APIs. They coexist perfectly in the same AKS cluster: one handles the reactive "pull" work, the other the proactive "push" traffic.&lt;/p&gt;
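&lt;p&gt;A minimal ScaledObject for the Service Bus case looks like this. The names are placeholders, and the TriggerAuthentication it references (for the queue connection) is omitted for brevity:&lt;/p&gt;

```yaml
# Illustrative KEDA ScaledObject: names are placeholders and the
# TriggerAuthentication it references is not shown.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: claims-worker
spec:
  scaleTargetRef:
    name: claims-worker        # the Deployment to scale
  minReplicaCount: 0           # scale to zero when the queue is empty
  maxReplicaCount: 30
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: claims
        messageCount: "50"     # target messages per replica
      authenticationRef:
        name: servicebus-auth
```

&lt;p&gt;KEDA adds replicas as the backlog grows past the per-replica target and removes the last one when the queue drains, which is the "scale to zero or fail your budget" model in practice.&lt;/p&gt;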

&lt;h1&gt;
  
  
  The Actual Lessons:
&lt;/h1&gt;

&lt;p&gt;Two decades in the trenches have distilled my architectural "religion" down to these hard-won truths. If you’re building on Azure in 2026, this is the blueprint.&lt;/p&gt;

&lt;h1&gt;
  
  
  Non-negotiable:
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;AKS + Node Auto-Provisioning (NAP):&lt;/strong&gt; If you aren't using the Karpenter-based model, you’re burning money on static node pools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Azure Container Apps (ACA) for Microservices:&lt;/strong&gt; Stop paying the "Kubernetes Tax" for simple APIs. Focus on the code, not the control plane.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Entra ID (Azure AD):&lt;/strong&gt; The undisputed heavyweight champion of identity. If you aren't using it to secure your global platforms, you're creating a headache for your future self. Use it for everything. OIDC federation is the only way to kill long-lived CI credentials.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform for the Big Picture:&lt;/strong&gt; Its ability to manage state and provide a platform-agnostic language is vital for coordinating hybrid cloud infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;KEDA for Async Workers:&lt;/strong&gt; If it’s an event-driven consumer, scale on queue depth, not CPU. Precision scaling based on Azure Service Bus depth or Event Hub lag. Scale to zero or fail your budget.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Application Insights from Day Zero:&lt;/strong&gt; End-to-end observability is the only way to manage 24/7 operational environments and shift the perception of IT within an organisation.&lt;/p&gt;

&lt;h1&gt;
  
  
  Avoid at all costs:
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Logic Apps for Complex Logic:&lt;/strong&gt; It’s a CI/CD and unit-testing nightmare. Keep the "glue" in Azure Functions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cosmos DB as a "Catch-all":&lt;/strong&gt; It’s an expensive way to store relational data. Unless you truly need global sub-10ms latency, stick to Azure SQL. Reserve it for global-scale NoSQL, not rigid schemas.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AKS for Small Teams:&lt;/strong&gt; Don't build a full K8s orchestration layer for a simple API unless you have the platform team to babysit it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tool Sprawl:&lt;/strong&gt; Fragmented third-party monitoring is the enemy of incident response. Stop paying for three different monitoring tools that don't talk to each other. Consolidate into a unified Log Analytics workspace.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verbose ARM Templates:&lt;/strong&gt; Don't stay in the "nested ARM" world for Azure-native projects; the cognitive load isn't worth it.&lt;/p&gt;

&lt;h1&gt;
  
  
  Niche wins worth knowing:
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Innersourcing:&lt;/strong&gt; The most effective way to improve culture and reduce technical debt across global silos.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Meta-Lessons:
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Ship Business Outcomes, Not Infrastructure:&lt;/strong&gt; My most successful projects weren't because of a "perfect" cluster; they were because the infrastructure allowed the product to scale without the team burning out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Boring Technology is Resilient Technology:&lt;/strong&gt; The C# Azure Function that’s been running for three years without a restart is worth more than the experimental service mesh that requires a weekly post-mortem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automation is Your Legacy:&lt;/strong&gt; In a high-pressure environment automation isn't a luxury, it’s your only defense against chaos.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Business Observability:&lt;/strong&gt; If your logs and metrics don't link system health to business value in 15 minutes, your architecture is too clever for its own good.&lt;/p&gt;




&lt;p&gt;Connect on &lt;a href="https://www.linkedin.com/in/andrew-u-404719240/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>terraform</category>
      <category>ai</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Introducing Our Engineering Progression Framework</title>
      <dc:creator>Andrew</dc:creator>
      <pubDate>Tue, 28 Jun 2022 14:45:58 +0000</pubDate>
      <link>https://dev.to/ghostinthewire5/introducing-our-engineering-progression-framework-3bjo</link>
      <guid>https://dev.to/ghostinthewire5/introducing-our-engineering-progression-framework-3bjo</guid>
      <description>&lt;p&gt;At the beginning of 2022 we wrapped up the first version of our Engineering Progression Framework. Now we’ve been using it internally for a little while, we also wanted to share it more widely, to help others learn more about Engineering here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A progression framework is a communication tool that supports fairness&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before we jump into the details, here’s a quick overview of what we mean when we talk about a "progression framework".&lt;/p&gt;

&lt;p&gt;Quite simply, it’s a set of shared expectations that we use to explain what we expect of engineers at different levels of seniority. Each level is described in the framework with a description, plus an illustration of the type of behaviours, impact, and skills we think are reflective of someone at that level.&lt;/p&gt;

&lt;p&gt;However, importantly, it’s not an exhaustive checklist. We’ve intentionally focused on a core set of examples that we think can fairly apply to any engineer here, but they’re not intended as a definitive list of everything a great engineer could do or be. People will almost certainly be doing important things that aren’t in the framework. There are many 'shapes' of engineers, and we’ll aim to celebrate people’s different strengths whilst also aiming for fairness and clarity through our core expectations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We’re pretty pleased with the result, but we’re not finished!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We’re planning to keep making improvements as we change and grow, but for now you can take a peek at what we’re using below:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://docs.google.com/presentation/d/1F2QH0q-AVh93IOOW3B5f-_SF9BJ4muSk/edit?usp=sharing&amp;amp;ouid=104042777269955323247&amp;amp;rtpof=true&amp;amp;sd=true" rel="noopener noreferrer"&gt;Engineers&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;&lt;a href="https://docs.google.com/presentation/d/1ReWxXyiU0Vo0XCE13kORgTtVEKkZH4uq/edit?usp=sharing&amp;amp;ouid=104042777269955323247&amp;amp;rtpof=true&amp;amp;sd=true" rel="noopener noreferrer"&gt;Engineering Managers&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Historically in the engineering space the only way for engineers to progress was through stopping coding and moving into management. We believe these are fundamentally different sets of skills, and we want to make sure all our engineers have the opportunity to progress without changing career. That said, for many folks, moving into management and creating systems to help engineers do their best is where they find their future lies, so you can see we support a switch of framework once engineers reach a certain level. We’re also planning to support folks who want to "swing on the engineer/manager pendulum", and switch back to engineering after a couple of years.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What’s next?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This framework will naturally evolve as we apply it – it will never be finished or perfect, and it is to be considered a living document, so our teams and our managers will help steer this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt; 😸&lt;/p&gt;

&lt;p&gt;I regularly post useful content related to Azure, DevOps and Engineering on Twitter. You should consider following me on &lt;a href="https://twitter.com/GhostInTheWire5" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>leadership</category>
      <category>devops</category>
      <category>devrel</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>The Engineering Handbook</title>
      <dc:creator>Andrew</dc:creator>
      <pubDate>Fri, 17 Jun 2022 08:48:11 +0000</pubDate>
      <link>https://dev.to/ghostinthewire5/the-engineering-handbook-2che</link>
      <guid>https://dev.to/ghostinthewire5/the-engineering-handbook-2che</guid>
      <description>&lt;p&gt;We’ve been working on codifying some of our working practices here into an Engineering Handbook.&lt;/p&gt;

&lt;p&gt;I've found that the key ingredients to empower &lt;strong&gt;autonomous teams&lt;/strong&gt; are &lt;strong&gt;ownership&lt;/strong&gt;, &lt;strong&gt;trust&lt;/strong&gt;, and a &lt;strong&gt;common language&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We want every team to own our customers’ success. As opposed to a static command and control model, we start from a position of trust in every team and continually work together to verify the &lt;strong&gt;quality&lt;/strong&gt;, &lt;strong&gt;security&lt;/strong&gt;, and &lt;strong&gt;reliability&lt;/strong&gt; of a product. &lt;/p&gt;

&lt;p&gt;We then scale this practice by ensuring our engineers have a common language. This includes a set of principles, rituals, and expectations to which they can align. The outcome is a loosely coupled and highly aligned system that empowers our teams to move fast, make decisions, and come up with innovative solutions that propel us forward.&lt;/p&gt;

&lt;p&gt;Take a look at our live &lt;a href="https://engineering.firstport.co.uk/" rel="noopener noreferrer"&gt;Engineering Handbook&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We want to share the way we work with everyone, to help improve teams across industries. Hopefully, our learnings will help you build autonomous teams and ingrain ownership in your own organisation.&lt;/p&gt;

&lt;p&gt;If you would like to build your own Engineering Handbook, feel free to fork my &lt;a href="https://github.com/ghostinthewires/Engineering-Handbook" rel="noopener noreferrer"&gt;GitHub Repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt; 😸&lt;/p&gt;

&lt;p&gt;I regularly post useful content related to Azure, DevOps and Engineering on Twitter. You should consider following me on &lt;a href="https://twitter.com/GhostInTheWire5" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>leadership</category>
      <category>tutorial</category>
      <category>devops</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Tech Radar for visualising Technology Strategy. What is it and how to build it?</title>
      <dc:creator>Andrew</dc:creator>
      <pubDate>Wed, 15 Jun 2022 11:35:18 +0000</pubDate>
      <link>https://dev.to/ghostinthewire5/tech-radar-for-visualising-technology-strategy-what-is-it-and-how-to-build-it-gb8</link>
      <guid>https://dev.to/ghostinthewire5/tech-radar-for-visualising-technology-strategy-what-is-it-and-how-to-build-it-gb8</guid>
      <description>&lt;p&gt;In this blog post I would like to share my knowledge about a very useful tool that helps you to visualise your Technology Strategy — &lt;strong&gt;Tech Radar&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Also, at the end of this blog I will share the live Tech Radar from my organisation and a link to my GitHub Repo so you can build your own!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxzuu66ud7bkxov3eyya.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxzuu66ud7bkxov3eyya.png" alt="Tech Radar" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is a Tech Radar?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tech Radar uses two categorising elements: the quadrants and the rings. The quadrants represent different kinds of blips, and the rings indicate where each blip sits in the adoption lifecycle.&lt;/p&gt;

&lt;p&gt;The quadrants are a categorisation of the type of blips:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Languages and Frameworks.&lt;/strong&gt; As it suggests, things such as C#, Java etc&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tools.&lt;/strong&gt; These could be software development tools, such as code scanners, Terraform etc.&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Platforms &amp;amp; Infrastructure.&lt;/strong&gt; Things that we build software on top of such as Azure, Salesforce etc&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Techniques.&lt;/strong&gt; These include elements of a software development process, such as Continuous Delivery; and ways of structuring software, such as Microservices.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Tech Radar also has four rings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;Adopt&lt;/strong&gt; ring is for Technologies you have high confidence in to serve your purpose. Technologies with a usage culture in your production environment, low risk and recommended to be widely used.&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;Trial&lt;/strong&gt; ring is for Technologies that you have seen work with success in project work to solve a real problem; first serious usage experience that confirm benefits and can uncover limitations. Trial technologies are slightly more risky; some engineers in your organisation may have walked this path and will share knowledge and experiences.&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;Assess&lt;/strong&gt; ring is for Technologies that are promising and have clear potential value-add for you; technologies worth investing some research and prototyping efforts in to see if it has impact. Assess Technologies have higher risks; they are often brand new and highly unproven in your organisation. You will find some engineers that have knowledge in the technology and promote it, you may even find teams that have started a prototyping effort.&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;strong&gt;Hold&lt;/strong&gt; ring is for Technologies that we think are not (yet) worth (further) investment. Hold technologies should not be used for new projects, but usually can be continued for existing ones.&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why do you need a Tech Radar?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Firstly, creating the Tech Radar is a very valuable exercise. It helps you to audit your portfolio, finding potential risks and blind spots.&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It gives more transparency into your Technology Department. The Tech Radar helps your teams and architects choose the best technologies for future projects. It shows the current state of your technology landscape and teams can choose the best tools and technology that are already adopted in your company.&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you keep your Tech Radar public (like we do) it can be very beneficial for your recruitment and Engineering brand. On one hand potential candidates can see the technology stack of the company. On the other hand you will be able to understand whether the knowledge and experience of the candidate are suitable for your environment.&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How to create and update?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Tech Radar is a living tool. And you should keep this tool up to date because this is your current technology landscape and the future target.&lt;/p&gt;

&lt;p&gt;Depending on the size of your organisation, updating the radar can be done by the community of the most active engineers (like we do), team leaders, architects or a special department.&lt;/p&gt;

&lt;p&gt;It is advisable to check and update the radar at least once every six months. Review legacy technologies to see whether it’s time to replace them with newer ones that have passed their adoption period, and compare the current radar against your target technology strategy.&lt;/p&gt;

&lt;p&gt;When you introduce new technologies or tools in your teams — check with your Tech Radar. Perhaps the new technology is already being tested in another team and you will save time &amp;amp; effort.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Our live Tech Radar!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So, as promised at the start of this blog, here is our live &lt;a href="https://techradar.firstport.co.uk/" rel="noopener noreferrer"&gt;Tech Radar&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbft4ro57xissg95neobs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbft4ro57xissg95neobs.png" alt="FirstPort Tech Radar" width="800" height="809"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using our Tech Radar you can trace the usage and adoption of many technologies and languages in our organisation.&lt;/p&gt;

&lt;p&gt;For example, you can see that GitHub Actions currently sits in the Adopt ring: a CI/CD service used to build and deploy services, run automated tests, deploy IaC, and more.&lt;/p&gt;

&lt;p&gt;Also, here is the link to my &lt;a href="https://github.com/ghostinthewires/Tech-Radar" rel="noopener noreferrer"&gt;GitHub Repo&lt;/a&gt; which you can fork to build your very own Tech Radar!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt; 😸&lt;/p&gt;

&lt;p&gt;I regularly post useful content related to Azure, DevOps and Engineering on Twitter. You should consider following me on &lt;a href="https://twitter.com/GhostInTheWire5" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>leadership</category>
      <category>tutorial</category>
      <category>devops</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Would I replace Terraform with Bicep ? 💪🏽</title>
      <dc:creator>Andrew</dc:creator>
      <pubDate>Thu, 10 Mar 2022 14:21:25 +0000</pubDate>
      <link>https://dev.to/ghostinthewire5/would-i-replace-terraform-with-bicep--5gbj</link>
      <guid>https://dev.to/ghostinthewire5/would-i-replace-terraform-with-bicep--5gbj</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ns91qt3swm0unj835cn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ns91qt3swm0unj835cn.png" alt="Azure Bicep" width="800" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For a long time I have been a huge fan of &lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; by Hashicorp for deploying my Azure cloud services. This is mainly due to finding &lt;a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/overview" rel="noopener noreferrer"&gt;ARM templates&lt;/a&gt; to be too verbose and cumbersome to work with - Microsoft’s response to these complaints is &lt;a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/overview?tabs=bicep" rel="noopener noreferrer"&gt;Bicep&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This week I decided to take a look at what all the fuss is about &amp;amp; see if I might replace Terraform with Bicep.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Bicep?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Bicep is a declarative language that is classified as a domain-specific language (DSL) for deploying Azure resources. &lt;/p&gt;

&lt;p&gt;The goal of this language is to make it easier to write Infrastructure as Code (IaC) targeting Azure Resource Manager (ARM) using a syntax that’s more friendly than the JSON syntax of Azure ARM Templates.&lt;/p&gt;

&lt;p&gt;Bicep works as an abstraction layer built on top of ARM Templates. Anything that can be done with Azure ARM Templates can be done with Bicep as it provides a "transparent abstraction" over ARM (Azure Resource Manager). With this abstraction, all the types, apiVersions, and properties valid within ARM Templates are also valid with Bicep.&lt;/p&gt;

&lt;p&gt;Bicep is a compiled / transpiled language. This means that the Bicep code is converted into ARM Template code, and the resulting ARM Template code is then used to deploy the Azure resources. This transpiling enables Bicep to use its own syntax and compiler for authoring Bicep files that compile down to Azure Resource Manager (ARM) JSON as a sort of intermediate language (IL).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyh4l7ahp3vgeqyrlt5i0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyh4l7ahp3vgeqyrlt5i0.png" alt="Bicep language compilation flow" width="585" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The way Bicep is transpiled into ARM JSON is similar to how many languages are transpiled into JavaScript so they can run in the web browser - TypeScript being a popular example. A transpiled language adds an abstraction layer that makes code easier and/or more feature-rich to write, before it is compiled down to IL code that gets executed. This is also similar to how C# and VB.NET compile down to MSIL in .NET.&lt;/p&gt;

&lt;p&gt;Transpiled languages are common in the development world, and in the DevOps world YAML and JSON are routinely converted between one another. Bicep is similar: it is transpiled into ARM JSON, which gives you an alternative syntax and feature set for writing declarative Infrastructure as Code instead of the often-cumbersome ARM JSON.&lt;/p&gt;
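As a quick sketch of that round trip, the transpilation can be driven in both directions from the Azure CLI (the file names here are placeholders):

```shell
# Compile a Bicep file down to an ARM template (produces main.json)
az bicep build --file main.bicep

# Go the other way: decompile an existing ARM template into Bicep
# (produces a .bicep file; the result may need some manual clean-up)
az bicep decompile --file azuredeploy.json
```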

&lt;p&gt;&lt;strong&gt;Bicep Benefits&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Support for all resource types and API versions.&lt;/li&gt;
&lt;li&gt;Better authoring experience using editors such as VS Code (you will get validation, type-safety, intellisense).&lt;/li&gt;
&lt;li&gt;Modularity can be achieved using &lt;a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/modules" rel="noopener noreferrer"&gt;modules&lt;/a&gt;. You can have modules representing an entire environment or a set of shared resources and use them anywhere in a Bicep file.&lt;/li&gt;
&lt;li&gt;Integration with Azure services such as Azure Policy, Templates specs, and Blueprints.&lt;/li&gt;
&lt;li&gt;No need to store a state file or keep any state. You can even use the &lt;a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-what-if?tabs=azure-powershell" rel="noopener noreferrer"&gt;what-if operation&lt;/a&gt; to preview your changes before deploying them.&lt;/li&gt;
&lt;li&gt;Bicep is open source with a strong community supporting it. All the binaries for the different supported operating systems can be downloaded from the official &lt;a href="https://github.com/Azure/bicep/releases" rel="noopener noreferrer"&gt;releases page&lt;/a&gt; of the Bicep open source project.&lt;/li&gt;
&lt;/ul&gt;
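To illustrate that last point, a what-if run is just a variant of a normal deployment command (the resource group and file name here are placeholders):

```shell
# Preview what a deployment would change, without actually deploying anything
az deployment group what-if \
  --resource-group my-rg \
  --template-file main.bicep
```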

&lt;p&gt;&lt;strong&gt;Bicep pre-requisites&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The tooling is pretty much the same as for ARM templates. That means that you need the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Either Azure PowerShell or Azure CLI. (I’ll use Azure CLI in this post, as I find it much more logical)&lt;/li&gt;
&lt;li&gt;The Bicep CLI (more on this in a second)&lt;/li&gt;
&lt;li&gt;Some form of text editor. I suggest VS Code as it can provide some pretty awesome help when working with Bicep templates&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Optional&lt;/em&gt;: The VS Code Bicep extension. This will give you superpowers when working with Bicep in VS Code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: I’m going to assume that you have the Azure CLI installed. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bicep CLI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To be able to work with Bicep files instead of ARM templates, you need the Bicep CLI. This is the part of the toolchain responsible for transpiling Bicep files to and from ARM templates. Yes…to AND from! More on that later!&lt;/p&gt;

&lt;p&gt;The Bicep CLI is installed by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az bicep install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or…if you are using Azure CLI version 2.20.0 or above, you can just ignore that step, as the Bicep CLI will be automatically installed when you run a command that needs it. So, in most cases, you don’t need to do anything to get Bicep file support on your machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: If you are on an earlier version of the Azure CLI, I would recommend updating that, instead of manually installing the Bicep CLI.&lt;/p&gt;

&lt;p&gt;To verify your Azure CLI version, you can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az version

{
  "azure-cli": "2.34.1",
  ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And to verify the installed version of the Bicep CLI you can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az bicep version

Bicep CLI version 0.4.1272
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you try running this command without having the Bicep CLI installed, you get an error message that says&lt;/p&gt;

&lt;p&gt;Bicep CLI not found. Install it now by running "az bicep install".&lt;/p&gt;

&lt;p&gt;And, as the error message says, you fix that by running &lt;em&gt;az bicep install&lt;/em&gt;, or by running any Bicep-related command, which will install it automatically.&lt;/p&gt;

&lt;p&gt;If you have an outdated Bicep CLI version, and want to update it to the latest and greatest, you just need to run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;az bicep upgrade
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you have the Bicep CLI installed (or just want to ignore it and have the Azure CLI install it when needed), you need a text editor of some kind to edit the Bicep files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VS Code and the Bicep extension&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I would highly recommend using VS Code when working with Bicep files. The reason for this, besides it being lightweight, cross-platform, fast and generally quite awesome, is the ability to install the Bicep extension that gives you extra help when working with Bicep files.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep" rel="noopener noreferrer"&gt;Bicep extension&lt;/a&gt; is available from the marketplace. Just search for bicep and you will find it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2j53hlmx2rfpk070mip0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2j53hlmx2rfpk070mip0.png" alt="VS Code Extension" width="538" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s actually all there is to it from a tooling point of view. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bicep Syntax&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every Bicep resource will have the below syntax:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;resource&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;symbolic-name&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'&amp;lt;resource-type&amp;gt;@&amp;lt;api-version&amp;gt;` = {  
  //properties
  name: '&lt;/span&gt;&lt;span class="nx"&gt;ghostinthewiresstorage&lt;/span&gt;&lt;span class="s1"&gt;'
  location: '&lt;/span&gt;&lt;span class="nx"&gt;westeurope&lt;/span&gt;&lt;span class="s1"&gt;'  
  properties: {
    //...sub properties
  }
}
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;resource&lt;/strong&gt;: is a reserved keyword.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;symbolic name&lt;/strong&gt;: is an identifier within the Bicep file which can be used to reference this resource elsewhere.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;resource-type&lt;/strong&gt;: is the type of the resource you're defining, e.g. Microsoft.Storage/storageAccounts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;api-version&lt;/strong&gt;: each resource provider publishes its own API version which defines which version of the Azure Resource Manager REST API should be used to deploy this resource.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;properties&lt;/strong&gt;: these are the resource-specific properties. For example, every resource has a &lt;strong&gt;name&lt;/strong&gt; and &lt;strong&gt;location&lt;/strong&gt;, and some have sub-properties you can set in addition.&lt;/li&gt;
&lt;/ul&gt;
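Putting those pieces together, a minimal concrete resource might look like the following (the account name and SKU are illustrative placeholders):

```bicep
// A small, self-contained storage account declaration
resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
  name: 'ghostinthewiresstorage' // must be globally unique
  location: 'westeurope'
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
  }
}
```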

&lt;p&gt;&lt;strong&gt;Parameters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When we talk about Infrastructure as Code and the reusability of our templates, we inevitably end up using parameters to customise our resources. Be it the name, SKU, username or password, we will need to change these per environment or application.&lt;/p&gt;

&lt;p&gt;In a Bicep file you can define the parameters that need to be passed to it when deploying resources. You can put validation on a parameter's value, provide a default value, and limit it to a set of allowed values. The format of a parameter is as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="kr"&gt;param&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;parameter-name&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;parameter-type&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;parameter-value&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;param&lt;/strong&gt;: is a reserved keyword.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;parameter-name&lt;/strong&gt;: is the name of the parameter.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;parameter-type&lt;/strong&gt;: is the type of the parameter such as string, object, etc.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;parameter-value&lt;/strong&gt;: is an optional default value, used when no value is passed in at deployment time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's review two examples to get a better understanding of the structure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="err"&gt;@&lt;/span&gt;&lt;span class="n"&gt;minLength&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;@&lt;/span&gt;&lt;span class="n"&gt;maxLength&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;21&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="kr"&gt;param&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;storageName&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example you're limiting the &lt;strong&gt;storageName&lt;/strong&gt; parameter's value length to be between 6 and 21 characters. Or:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="err"&gt;@&lt;/span&gt;&lt;span class="n"&gt;allowed&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s1"&gt;'Standard_LRS'&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s1"&gt;'Standard_GRS'&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s1"&gt;'Standard_RAGRS'&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s1"&gt;'Standard_ZRS'&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s1"&gt;'Premium_LRS'&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s1"&gt;'Premium_ZRS'&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s1"&gt;'Standard_GZRS'&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s1"&gt;'Standard_RAGZRS'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="kr"&gt;param&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;storageRedundancy&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Standard_GRS'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example you're specifying the allowed values for the &lt;strong&gt;storageRedundancy&lt;/strong&gt; parameter and also providing a default value to use if nothing is passed in during the deployment.&lt;/p&gt;

&lt;p&gt;With ARM templates you used a separate file, usually with a name ending in .parameters.json, to pass parameter values during deployments. Bicep uses the same JSON parameters file format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$schema&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s2"&gt;"contentVersion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s2"&gt;"parameters"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"storageName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"myuniquestoragename"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"storageRedundancy"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Standard_GZRS"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
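Tying the two files together, a deployment that reads values from that parameters file could look like this (the resource group and file names are placeholders):

```shell
# Deploy a Bicep file, supplying values from a JSON parameters file
az deployment group create \
  --resource-group my-rg \
  --template-file main.bicep \
  --parameters @main.parameters.json
```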



&lt;p&gt;&lt;strong&gt;Variables&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Similar to parameters, variables play an important part in our templates, especially when it comes to naming conventions. These can store complex expressions to keep our templates clean and their maintenance simple. In Bicep variables are defined using the var keyword:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="kr"&gt;var&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;variable-name&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where &lt;strong&gt;variable-name&lt;/strong&gt; is the name of your variable. For example in our previous Bicep file we could have used a variable for our storage name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="kr"&gt;var&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;storageAccName&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'sa${uniqueString(resourceGroup().id)}'&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="n"&gt;resource&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;stg&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Microsoft.Storage/storageAccounts@2019-06-01'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;storageAccountName&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;//...&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we need a unique name for our storage account the uniqueString function is used (Don't worry about that for now). The point is that we can create variables and use them in our template with ease.&lt;/p&gt;

&lt;p&gt;There are multiple data types you can use for variables and parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;String&lt;/li&gt;
&lt;li&gt;Boolean&lt;/li&gt;
&lt;li&gt;Numeric&lt;/li&gt;
&lt;li&gt;Object&lt;/li&gt;
&lt;li&gt;Array&lt;/li&gt;
&lt;/ul&gt;
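A few illustrative declarations, one per type (the names and values are made up) - note that for variables Bicep infers the type from the assigned value:

```bicep
var projectName = 'velocity'   // string
var isProduction = false       // boolean
var instanceCount = 3          // numeric (integer)
var commonTags = {             // object
  owner: 'andrew'
  env: 'dev'
}
var regions = [                // array
  'westeurope'
  'northeurope'
]
```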

&lt;p&gt;&lt;strong&gt;Expressions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Expressions are used in our templates for a variety of reasons, from getting the current location of the resource group, to the subscription ID, to the current datetime.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Functions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The good thing is that ANY valid &lt;a href="https://docs.microsoft.com/en-gb/azure/azure-resource-manager/templates/template-functions" rel="noopener noreferrer"&gt;ARM template function&lt;/a&gt; is also a valid Bicep function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="kr"&gt;param&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;currentTime&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;utcNow&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="kr"&gt;var&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;location&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;resourceGroup&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;location&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="kr"&gt;var&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;makeCapital&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;toUpper&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'all lowercase'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ARM templates have an outputs section where you can send information out of your deployment to be accessed by other deployments or subsequent tasks. In Bicep you have the same concept via the output keyword.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;resource&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;stg&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Microsoft.Storage/storageAccounts@2019-06-01'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;//...&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;storageId&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;stg.id&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Loops&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In ARM templates if you wanted to deploy a resource multiple times you could leverage the copy operator to add a resource n times based on the loop count. In Bicep you have the for operator at your disposal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;resource&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;foo&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'my.provider/type@2021-03-01'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;ITERATOR_NAME&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;ARRAY&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="o"&gt;...&lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where ITERATOR_NAME is a new symbol that's only available inside your resource declaration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="kr"&gt;param&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;containerNames&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;array&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s1"&gt;'images'&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s1"&gt;'videos'&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="s1"&gt;'pdf'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="n"&gt;resource&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;blob&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Microsoft.Storage/storageAccounts/blobServices/containers@2019-06-01'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;containerNames&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'${stg.name}/default/${name}'&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;//...&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="err"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This snippet creates three containers within the storage account in a loop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Existing keyword&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you want to deploy a resource that depends on an existing resource, you can leverage the existing keyword.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;resource&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;stg&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Microsoft.Storage/storageAccounts@2019-06-01'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;existing&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;storageAccountName&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You won't need the other properties since the resource already exists; you just need enough information to identify it. Once you have this reference, you can use it in other parts of your deployment.&lt;/p&gt;
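For instance, you can read values off the existing resource just as if it were declared in the same file (the account name here is a hypothetical pre-existing one):

```bicep
resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' existing = {
  name: 'myuniquestoragename' // a storage account deployed elsewhere
}

// Use the reference elsewhere, e.g. to surface its blob endpoint
output blobEndpoint string = stg.properties.primaryEndpoints.blob
```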

&lt;p&gt;&lt;strong&gt;Modules&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In ARM templates you had the concept of linked templates when it came to reusing a template in other deployments. In Bicep you have &lt;strong&gt;modules&lt;/strong&gt;. You can define a resource in a module and reuse that module in other Bicep files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── main.bicep
└── storage.bicep
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In our &lt;strong&gt;storage&lt;/strong&gt; file you will define the resource, its parameters, variables, outputs, etc:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;//storage.bicep&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nx"&gt;param&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;storageAccountName&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="kr"&gt;var&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;storageSku&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Standard_LRS'&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="n"&gt;resource&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;storage&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Microsoft.Storage/storageAccounts@2019-06-01'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;storageAccountName&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;location:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;resourceGroup&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;location&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;kind:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Storage'&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;sku:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;storageSku&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And in the &lt;strong&gt;main&lt;/strong&gt; file you will reuse the storage account as a module using the &lt;strong&gt;module&lt;/strong&gt; keyword:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;//main.bicep&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;storage&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'./storage.bicep'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'storageDeploy'&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;params:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;storageAccountName:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'&amp;lt;YOURUNIQUESTORAGENAME&amp;gt;'&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="n"&gt;output&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;storageName&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;array&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;stg.outputs.containerProps&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You only need to pass the required properties; in the case of our storage account, that is just the name.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The &lt;em&gt;any&lt;/em&gt; keyword&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are some cases where Bicep raises a false-positive error or warning, for example when the API does not have the correct type definition. You can use the &lt;em&gt;any&lt;/em&gt; keyword to work around these situations when defining resources that have incorrect types assigned. One example is the container instance CPU and memory properties: the type definition expects an int, but they are in fact numbers, since you can pass non-integer values such as 0.5.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;resource&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;wpAci&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'microsoft.containerInstance/containerGroups@2019-12-01'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'wordpress-containerinstance'&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;location:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;properties:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;containers:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="n"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'wordpress'&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="n"&gt;properties:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="o"&gt;...&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="n"&gt;resources:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="n"&gt;requests:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="n"&gt;cpu:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;any&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'0.5'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="n"&gt;memoryInGB:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;any&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'0.7'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By wrapping the value in &lt;em&gt;any&lt;/em&gt;, you can work around errors that might otherwise be raised during the build or validation stage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Create Bicep Files&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As previously mentioned, developers can use the Microsoft-provided Visual Studio Code extension for the Bicep language. It provides language support and resource autocompletion to assist with creating and validating Bicep files, reducing coding errors and making writing code more efficient.&lt;/p&gt;

&lt;p&gt;One of the nice things about Bicep, compared to ARM templates, is that you don’t need any form of "base structure" to make a valid Bicep file. ARM templates require a JSON root element; in Bicep, as long as the file extension is .bicep, it is considered a Bicep file.&lt;/p&gt;

&lt;p&gt;Take a look at the following template Bicep code. Notice the compact structure; it is maybe half the size of the typical ARM template. Bicep is smart enough to figure out when resources depend on each other: it knows it first needs to deploy &lt;strong&gt;appServicePlan&lt;/strong&gt;, and it automatically adds the &lt;strong&gt;dependsOn&lt;/strong&gt; entry when the Bicep is converted to an ARM template. Here is the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="kr"&gt;param&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'ghostinthewires-bicep-webapplication'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="kr"&gt;param&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;location&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;resourceGroup&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;location&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="kr"&gt;param&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sample&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'ghostinthewires'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="kr"&gt;param&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;sampleCode&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'G1'&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;resource&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;webApp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Microsoft.Web/sites@2022-01-01'&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;location:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;properties:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;siteConfig:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="n"&gt;metadata:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="n"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'MY_TECH_STACK'&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="n"&gt;value:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'dotnetcore'&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;serverFarmId:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;appServicePlan.id&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;resource&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;appServicePlan&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'Microsoft.Web/serverfarms@2022-01-01'&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;location:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;properties:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;sample:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;Tier:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;sample&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;Name:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;sampleCode&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The following ARM JSON snippet shows the resources array used for deployment, with a link to the template and, if available, a link to the parameters file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="s2"&gt;"resources"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Microsoft.Resources/deployments"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"apiVersion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2022-01-01"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"linkedTemplate"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"properties"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"mode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Incremental"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"templateLink"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"uri"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://mystorageaccount.blob.core.windows.net/AzureTemplates/newStorageAccount.json"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"contentVersion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0.0.0"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="s2"&gt;"parametersLink"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"uri"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://mystorageaccount.blob.core.windows.net/AzureTemplates/newStorageAccount.parameters.json"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"contentVersion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0.0.0"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you have your fully formed Bicep file, you can verify that it is syntactically correct by building it. Building a Bicep file transpiles it to an ARM template.&lt;/p&gt;

&lt;p&gt;To build your .bicep file, execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;az&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;bicep&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;build&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--file&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;iac.bicep&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;How to Convert ARM Templates to Bicep&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can also easily convert an existing ARM template to Bicep code. The command for this is &lt;em&gt;&lt;strong&gt;az bicep decompile&lt;/strong&gt;&lt;/em&gt;. It takes a JSON file as input and attempts to convert it to Bicep.&lt;/p&gt;

&lt;p&gt;To decompile ARM template JSON files to Bicep, use Azure CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;az&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;bicep&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;decompile&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--file&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;AzureARM.json&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Developers can export the template for a resource group and then pass it directly to the decompile command. Refer to the following example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;az&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;group&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;export&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"my_resource_group_name"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;AzureARM.json&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;az&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;bicep&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;decompile&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;--file&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;AzureARM.json&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;CI/CD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If, like me, you're using &lt;a href="https://github.com/features/actions" rel="noopener noreferrer"&gt;GitHub Actions&lt;/a&gt; for your CI/CD pipeline, there is already a &lt;a href="https://github.com/marketplace/actions/bicep-build" rel="noopener noreferrer"&gt;Bicep action&lt;/a&gt;, created by Microsoft Developer Advocate &lt;a href="https://github.com/justinyoo" rel="noopener noreferrer"&gt;Justin Yoo&lt;/a&gt;, which you can use to build your Bicep file and deploy it to Azure.&lt;/p&gt;
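
&lt;p&gt;As a rough sketch, a workflow using that action might look something like this (the action version, secret name, resource group and file names below are illustrative placeholders, so check the action's README for the exact inputs):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative sketch only - adjust versions, secrets and names to your setup
name: Deploy Bicep
on: [push]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Sign in to Azure with a service principal stored as a repository secret
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      # Transpile iac.bicep into an ARM template (iac.json)
      - uses: aliencube/bicep-build-actions@v0.3
        with:
          files: iac.bicep
      # Deploy the generated ARM template
      - uses: azure/arm-deploy@v1
        with:
          resourceGroupName: my-resource-group
          template: ./iac.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;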

&lt;p&gt;If you are using &lt;a href="https://azure.microsoft.com/en-gb/services/devops/pipelines/" rel="noopener noreferrer"&gt;Azure Pipelines&lt;/a&gt;, you can use the &lt;a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/add-template-to-azure-pipelines?tabs=CLI" rel="noopener noreferrer"&gt;Azure CLI task&lt;/a&gt; to run the same commands you would run from your laptop.&lt;/p&gt;
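
&lt;p&gt;A minimal pipeline sketch (the service connection, resource group and file names are illustrative placeholders) might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative sketch only - replace the service connection and names with your own
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: AzureCLI@2
    inputs:
      azureSubscription: 'my-service-connection'
      scriptType: bash
      scriptLocation: inlineScript
      # Recent Azure CLI versions build and deploy the .bicep file in one step
      inlineScript: |
        az deployment group create \
          --resource-group my-resource-group \
          --template-file iac.bicep
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;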

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I find Bicep much nicer to work with than ARM templates and, looking at it from a purely Microsoft-native standpoint, it would also be my bet for the future. Sure, ARM templates need to support, in some way, pretty much any feature that Bicep uses. But I think the main focus from Microsoft, when it comes to the end-user experience, will go into Bicep.&lt;/p&gt;

&lt;p&gt;However, back to my initial question: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Would I replace Terraform with Bicep?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In a word, No! &lt;/p&gt;

&lt;p&gt;Terraform is a different beast when compared to ARM and Bicep, even if its syntax is actually quite similar to Bicep's.&lt;/p&gt;
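
&lt;p&gt;To illustrate that similarity, here is roughly what a storage account looks like in Terraform's HCL (the names and values are illustrative placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative sketch only - a storage account declared in Terraform
resource "azurerm_storage_account" "storage" {
  name                     = "myuniquestoragename"
  resource_group_name      = "my-resource-group"
  location                 = "uksouth"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;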

&lt;p&gt;Under the hood, it works in a completely different way, using the Azure REST API instead of talking directly to the Azure Resource Manager. The downside is that any new Azure feature must first be released in the REST API, then in the Go SDK, and finally in the Terraform Azure provider, a chain of events that can take a while.&lt;/p&gt;

&lt;p&gt;That said, this is only really an issue if you are using bleeding-edge features; pretty much everything else is supported. The flip side is that, by using a provider-based system, Terraform is able to target a &lt;strong&gt;lot&lt;/strong&gt; of different clouds and systems, which in turn lets you use your IaC not only for your Azure resources but potentially for a bunch of other systems and clouds. Something ARM/Bicep will never be able to do.&lt;/p&gt;

&lt;p&gt;So, if you need to go outside the realm of Azure, where ARM/Bicep is not going to cut it, I still think Terraform is my favoured option. On the other hand, if you are strictly Azure focused, and have no other clouds/systems you want to integrate with, I think Bicep might still be a great option.&lt;/p&gt;

&lt;p&gt;But keep in mind, there are also other tools such as &lt;a href="https://www.pulumi.com/" rel="noopener noreferrer"&gt;Pulumi&lt;/a&gt;, an IaC tool that also benefits from a provider-based architecture but adds the ability to use a real programming language when defining your desired state.&lt;/p&gt;

&lt;p&gt;So I would urge you to try out the various options, MVP-style, and see what works for you!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bonus tip&lt;/strong&gt;: If you want to play around a bit more with Bicep, I suggest having a look at the &lt;a href="https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/learn-bicep" rel="noopener noreferrer"&gt;Bicep learning path at Microsoft Docs&lt;/a&gt;. It will give you a deeper introduction to Bicep in an easy-to-digest format.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt; 😸&lt;/p&gt;

&lt;p&gt;I regularly post useful content related to Azure, DevOps and Engineering on Twitter. You should consider following me on &lt;a href="https://twitter.com/GhostInTheWire5" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>azure</category>
      <category>iac</category>
      <category>bicep</category>
    </item>
    <item>
      <title>How we used DORA metrics to boost deployments, increase automation and more</title>
      <dc:creator>Andrew</dc:creator>
      <pubDate>Mon, 28 Feb 2022 16:09:25 +0000</pubDate>
      <link>https://dev.to/ghostinthewire5/how-we-used-dora-metrics-to-boost-deployments-increase-automation-and-more-dd9</link>
      <guid>https://dev.to/ghostinthewire5/how-we-used-dora-metrics-to-boost-deployments-increase-automation-and-more-dd9</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feb5jjncn1n1ekmxpmlni.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feb5jjncn1n1ekmxpmlni.png" alt="North Star Metrics" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In 2021, I started wondering how I could measure the overall improvement in performance across the engineering department. We were in the early stages of a program of work called North Star, which was all about making our engineering capability more efficient and flexible in responding to business needs.&lt;/p&gt;

&lt;p&gt;After researching various options, we decided on the &lt;a href="https://cloud.google.com/blog/products/devops-sre/using-the-four-keys-to-measure-your-devops-performance" rel="noopener noreferrer"&gt;DORA metrics&lt;/a&gt;. They provided us with all the necessary insights to track our success, and benchmark ourselves against a definition of good.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is DORA?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;DORA is the acronym for the DevOps Research and Assessment group: they’ve surveyed more than 50,000 technical professionals worldwide to better understand how technical practices, cultural norms, and management approaches affect organisational performance.&lt;/p&gt;

&lt;p&gt;(Take a dive into the &lt;a href="https://cloud.google.com/devops/state-of-devops" rel="noopener noreferrer"&gt;latest DORA Report&lt;/a&gt; and the book that summarizes the findings: &lt;a href="https://www.amazon.co.uk/Accelerate-Software-Performing-Technology-Organizations/dp/1942788339/ref=sr_1_3?keywords=Accelerate-Building-Performing-Technology-Organizations&amp;amp;qid=1645105665&amp;amp;sr=8-3" rel="noopener noreferrer"&gt;Accelerate&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are the metrics we are using?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cycle Time&lt;/strong&gt; - Time between the first commit on a merge request to master and production deployment&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deployment Frequency&lt;/strong&gt; - Deployment Frequency helps identify the rate at which you are delivering new business value to your customers. Smaller deployments have less risk of going wrong and provide an opportunity to deliver value to your customers in shorter iterations, allowing you to learn quicker.&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Change Failure Rate&lt;/strong&gt; - For the primary application or service you work on, the percentage of changes to production or released to users that result in degraded service (e.g., lead to service impairment or outage) and subsequently require remediation (e.g., a hotfix, rollback, fix forward, or patch)&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Throughput (Detecting Burnout)&lt;/strong&gt; - Throughput gives us a sense of the team's bandwidth. It gives us a picture into how much work we can typically accomplish. Teams should aim for consistent throughput.&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How do we understand the metrics?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cycle Time&lt;/strong&gt; - Reducing the number of blockers for developers&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deployment Frequency&lt;/strong&gt; - Limiting the amount of code going to production at once (limited batch size)&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Change Failure Rate&lt;/strong&gt; - Improving quality focus, part of our &lt;a href="https://dev.to/ghostinthewire5/implementing-a-continuous-testing-strategy-for-devops-121l"&gt;Continuous Testing Strategy&lt;/a&gt;.&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Throughput (Detecting Burnout)&lt;/strong&gt; - &lt;a href="https://dev.to/ghostinthewire5/so-what-are-we-doing-about-technical-debt--29m3"&gt;Paying back technical debt&lt;/a&gt; &amp;amp; introducing &lt;a href="https://dev.to/ghostinthewire5/how-firstport-are-leveraging-the-power-of-open-source-to-drive-continuous-code-quality-fd0"&gt;automation&lt;/a&gt; to reduce toil&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Following the rules of lean:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Optimize Work In Progress (&lt;strong&gt;Cycle Time&lt;/strong&gt;)&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Value is only released to production once it leaves the factory floor (&lt;strong&gt;Deployment Frequency&lt;/strong&gt;)&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Practice &lt;a href="https://en.wikipedia.org/wiki/Kaizen" rel="noopener noreferrer"&gt;Kaizen&lt;/a&gt; (&lt;strong&gt;Change Failure Rate&lt;/strong&gt;)&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Invest in SRE/DevOps automation (&lt;strong&gt;Throughput (Detecting Burnout)&lt;/strong&gt;)&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How are the metrics used internally?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The dashboard is regularly reviewed by senior engineering management, and the metrics are discussed in our monthly town hall meeting and our fortnightly Ops Review. Each team is encouraged to reflect on the metrics as they plan their work and to consider improvements they could introduce.&lt;/p&gt;

&lt;p&gt;The metrics also influence decisions and prioritisation. Just as importantly, they help us transform our company culture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In terms of changes measured:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpib3tjli7lhamhc60aun.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpib3tjli7lhamhc60aun.png" alt="Cycle Time" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cycle Time&lt;/strong&gt; - as we had not measured this before, the main benefit for us is understanding what we need to improve. In 2021 it actually increased to 18.5 days (due to reasons), but for 2022 we are currently averaging around 8 days.
 &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8itdh2ioz6645vck31zx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8itdh2ioz6645vck31zx.png" alt="Deployment Frequency" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Deployment Frequency&lt;/strong&gt; improved from once a week to once every 1.4 days (a 5x increase).
 &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpj19ba4ij7lo7sydmkl6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpj19ba4ij7lo7sydmkl6.png" alt="Change Failure Rate" width="800" height="443"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Change Failure Rate&lt;/strong&gt; was about 8% before we started; it now oscillates between ~3-4% (a 50% decrease).
 &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faq6rljv9i8ccb5mkibk8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faq6rljv9i8ccb5mkibk8.png" alt="Throughput" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Throughput (Detecting Burnout)&lt;/strong&gt; - as with Cycle Time, we had not measured this before, so the main benefit for us is understanding what we need to improve. Throughput per developer per week increased by 93%; however, we know why (new ways of working, additional code bases, etc.). We are keeping a close eye on this to ensure it returns to healthy levels, and so far in 2022 it has!
 &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The main cultural changes were:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We have automated the majority of our deployment pipelines (Using &lt;a href="https://dev.to/ghostinthewire5/how-firstport-execute-a-database-as-code-strategy-using-dbup-terraform-github-actions-2b58"&gt;GitHub Actions&lt;/a&gt;).&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We have moved the bulk of our infrastructure management to a standardised Infrastructure as Code (mainly &lt;a href="https://dev.to/ghostinthewire5/how-firstport-manage-github-using-code-stored-in-github-41f6"&gt;Terraform&lt;/a&gt;).&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We have improved our &lt;a href="https://dev.to/ghostinthewire5/implementing-a-continuous-testing-strategy-for-devops-121l"&gt;Quality Assurance &amp;amp; Testing&lt;/a&gt; process.&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We hold the ambition to join the elite performing group of organisations as defined by the State of DevOps report. Each day brings us closer to that goal.&lt;br&gt;
 &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are our future plans?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On the technical side, we are working to improve automation of the CI/CD pipelines, testing process &amp;amp; observability. &lt;/p&gt;

&lt;p&gt;On the DevOps/DORA culture side, we are providing regular talks and training to wider audiences (not only engineering), to establish DORA as a reference point in future product development. We are also making it a key point of our new consolidated engineering strategy.&lt;/p&gt;

&lt;p&gt;I’ve found the DORA metrics helped us improve our software development and delivery processes. With these findings, organisations can make informed adjustments in their process workflows, automation, team composition, tools, and more. I recommend you try this in your organisation too.&lt;/p&gt;

&lt;p&gt;Further reading:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.amazon.co.uk/Phoenix-Project-Devops-Helping-Business/dp/1942788290/ref=sr_1_1?crid=3DI133RHO9CSD&amp;amp;keywords=the+phoenix+project&amp;amp;qid=1645110334&amp;amp;sprefix=the+phoenix+project%2Caps%2C144&amp;amp;sr=8-1" rel="noopener noreferrer"&gt;The Phonenix Project&lt;/a&gt; by Gene Kim, Kevin Behr and George Spafford&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.amazon.co.uk/Goal-Process-Ongoing-Improvement/dp/0566086654/ref=sr_1_1?crid=29JBJD9O8WTDO&amp;amp;keywords=the+goal&amp;amp;qid=1645110393&amp;amp;sprefix=the+goal%2Caps%2C125&amp;amp;sr=8-1" rel="noopener noreferrer"&gt;The Goal&lt;/a&gt;: A Process of Ongoing Improvement by Eliyahu Goldratt and Jeff Cox&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.amazon.co.uk/Unicorn-Project-Disruption-Redshirts-Overthrowing/dp/1942788762/ref=sr_1_1?crid=1RLFHFMXHNVQ2&amp;amp;keywords=the+unicorn+project&amp;amp;qid=1645110428&amp;amp;sprefix=the+unicorn+projec%2Caps%2C98&amp;amp;sr=8-1" rel="noopener noreferrer"&gt;The Unicorn Project&lt;/a&gt; by Gene Kim et al&lt;br&gt;
 &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.amazon.co.uk/Devops-Handbook-World-Class-Reliability-Organizations/dp/1950508404/ref=sr_1_1?crid=10337Z8FXTW4G&amp;amp;keywords=the+devops+handbook&amp;amp;qid=1645111014&amp;amp;sprefix=the+devops+handbook%2Caps%2C98&amp;amp;sr=8-1" rel="noopener noreferrer"&gt;The DevOps Handbook&lt;/a&gt; by Patrick Debois et al&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt; 😸&lt;/p&gt;

&lt;p&gt;I regularly post useful content related to Azure, DevOps and Engineering on Twitter. You should consider following me on &lt;a href="https://twitter.com/GhostInTheWire5" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>leadership</category>
      <category>agile</category>
    </item>
    <item>
      <title>Implementing a Continuous Testing strategy for DevOps</title>
      <dc:creator>Andrew</dc:creator>
      <pubDate>Thu, 17 Feb 2022 12:09:48 +0000</pubDate>
      <link>https://dev.to/ghostinthewire5/implementing-a-continuous-testing-strategy-for-devops-121l</link>
      <guid>https://dev.to/ghostinthewire5/implementing-a-continuous-testing-strategy-for-devops-121l</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femdipr07ixn5awbbs242.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Femdipr07ixn5awbbs242.png" alt="Continious Testing" width="800" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, before I jump into our Continuous Testing Strategy, I want to cover why we want to enhance our testing capability.&lt;/p&gt;

&lt;p&gt;At its heart, it is to reduce the risk of incidents that could cause loss to the business.&lt;/p&gt;

&lt;p&gt;As you can see in the above image, the cost to the business increases exponentially the later a bug is found.&lt;/p&gt;

&lt;p&gt;The expected outcomes we are aiming for are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.&lt;/strong&gt; To support rapid deployment &amp;amp; decision making&lt;br&gt;
&lt;strong&gt;2.&lt;/strong&gt; To manage and maintain an effective risk appetite&lt;br&gt;
&lt;strong&gt;3.&lt;/strong&gt; To define clear roles and accountability for testing&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe23qkpgpshrjcpmd74kg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe23qkpgpshrjcpmd74kg.png" alt="Testing Pyramid" width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will use the Test Automation Pyramid as a strategy guide to planning our DevOps Testing Strategy. &lt;/p&gt;

&lt;p&gt;Continuous Testing needs to be a key element in our DevOps testing strategy if we want to successfully implement a DevOps pipeline. &lt;/p&gt;

&lt;p&gt;Continuous Testing, which is often called shift-left testing, is an approach to software and system testing in which testing is performed earlier in the software development lifecycle, with the goal of increasing quality, shortening long test cycles and reducing the possibility of software defects making their way into production code.&lt;/p&gt;

&lt;p&gt;A best practice is to use test automation to eliminate much of the risk that comes with continuous integration and to get quick feedback on application quality. Pairing continuous integration with test automation enables teams to easily test every new code iteration and reduces the risk of errors occurring in Production. &lt;/p&gt;

&lt;p&gt;And as you move further up the pyramid toward manual tests, things generally get slower &amp;amp; more expensive. &lt;/p&gt;

&lt;p&gt;However, we have to be careful about what we choose to automate &amp;amp; where, as this is not always the case; that is why we will take a risk-based approach &amp;amp; automate where it is sensible to do so.&lt;/p&gt;
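&lt;p&gt;To make the base of the pyramid concrete, here is what a fast, isolated unit test looks like - the kind that can run on every commit at near-zero cost. The business rule and function names are invented for illustration and do not come from a real FirstPort codebase.&lt;/p&gt;

```python
# Hypothetical example of the pyramid's base layer: a fast unit test.
def service_charge(balance, rate):
    """Illustrative rule: charge is rate% of balance, never negative."""
    return max(0.0, balance * rate / 100.0)

def test_positive_balance():
    assert service_charge(200.0, 5) == 10.0

def test_negative_balance_is_clamped():
    assert service_charge(-50.0, 5) == 0.0

# Tests like these run in milliseconds, which is why they sit at the
# wide, cheap base of the pyramid, while UI and manual tests sit at the
# narrow, expensive top.
test_positive_balance()
test_negative_balance_is_clamped()
print("unit tests passed")
```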




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4abb7j7xclb1c4c5c9j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4abb7j7xclb1c4c5c9j.png" alt="Test Strategy" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And this leads me on to: how do we get there?&lt;/p&gt;

&lt;p&gt;I see this as a four-step approach:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Setting the Proper Foundations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We needed to understand how the business was currently ensuring the quality of systems and business processes. When and how are applications being tested, and how are end-to-end business processes being validated when new technology is deployed? &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:  Start with a Project&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We wanted to take steps to prove the new approach before making a large-scale investment, and the best way to do that was to identify a project with which to demonstrate the value of testing. &lt;/p&gt;

&lt;p&gt;The identified project has a simplified version of the testing capability we want to create; this has been successful and will help pave the way and drive demand for more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:  Build a Testing Centre of Excellence Programme&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We want to establish automated business process validation as a competency. To achieve business value and innovation, the TCOE must be managed the way any business asset is: as an integrated set of repeatable activities focused on producing a positive business outcome. Building a TCOE entails adopting an approach and a technology across the business, but it’s more than skill alone. It includes people, processes, technology &amp;amp; tooling – as shown in the image above.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Adopt across the business&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The single most important aspect to the success of a TCOE is strong executive sponsorship and an executive champion. We will move the TCOE forward by effecting change and quantifying the value one project at a time, and continue to demonstrate its successes and value through key projects, strong communications and measurable results.  &lt;/p&gt;

&lt;p&gt;We will keep working on improvements to generate better ROI and establish an environment of quality across the entirety of the business.&lt;/p&gt;

&lt;p&gt;Hope this overview of how we plan to implement a Continuous Testing Strategy will help you in your journey as an Engineering Manager / Test Manager 😇&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt; 😸&lt;/p&gt;

&lt;p&gt;I regularly post useful content related to Azure, DevOps and Engineering on Twitter. You should consider following me on &lt;a href="https://twitter.com/GhostInTheWire5" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>testing</category>
      <category>devops</category>
      <category>devrel</category>
      <category>testautomation</category>
    </item>
    <item>
      <title>So what are we doing about Technical Debt ?</title>
      <dc:creator>Andrew</dc:creator>
      <pubDate>Thu, 17 Feb 2022 10:34:43 +0000</pubDate>
      <link>https://dev.to/ghostinthewire5/so-what-are-we-doing-about-technical-debt--29m3</link>
      <guid>https://dev.to/ghostinthewire5/so-what-are-we-doing-about-technical-debt--29m3</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fdnukgivh2w6nvspb9v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fdnukgivh2w6nvspb9v.png" alt="Ward Cunningham" width="652" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For those of you not familiar with the term “Technical Debt” – this is a concept introduced by &lt;a href="https://en.wikipedia.org/wiki/Ward_Cunningham" rel="noopener noreferrer"&gt;Ward Cunningham&lt;/a&gt; (an American software developer &amp;amp; a co-author of the Manifesto for Agile Software Development).&lt;/p&gt;

&lt;p&gt;This can be summarised as: the incremental cost and loss of agility to a company as a result of prior decisions that were made to save time or money when implementing new systems or maintaining existing ones.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zy3vis5k9lyqjf5zi0t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5zy3vis5k9lyqjf5zi0t.png" alt="Technical Debt" width="723" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Financial debt is a term that most people are well versed in, yet technical debt can have similarly crippling consequences due to the hidden costs that it can incur.&lt;/p&gt;

&lt;p&gt;Just like Financial Debt - Technical Debt should be reserved for cases when people have made a considered decision to adopt a design strategy that isn't sustainable in the longer term, but yields a short term benefit. The point is that the debt yields value sooner, but needs to be paid off as soon as possible.&lt;/p&gt;

&lt;p&gt;So, how are we going to deal with our Technical Debt?&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqht31ita14ash1sa47ux.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqht31ita14ash1sa47ux.png" alt="Tech Debt Optimization" width="717" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, I've split this into Management &amp;amp; Prevention&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. First, we need to figure out what and how much technical debt we have&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Just like financial debt, which has a seniority that determines what gets paid back first, technical debt has a similar pattern of seniority: we must start with our mission-critical systems. What technical debt do they have? Then look at the wider ecosystem – better put, what technical debt between our systems is causing us expense?&lt;/p&gt;

&lt;p&gt;Keep this simple! Take our top ten ideas and put them into a 2x2 matrix: easy/hard to pay down on one axis and degree of benefit on the other.&lt;/p&gt;
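&lt;p&gt;A one-dimensional proxy for that matrix is to score each idea for effort and benefit and sort by the difference, which surfaces high-benefit/low-effort items first. The debt items and scores below are entirely hypothetical.&lt;/p&gt;

```python
# Hypothetical debt backlog: "effort" and "benefit" are rough 1-10
# scores agreed by the team.
debt = [
    {"name": "rewrite reporting module", "effort": 9, "benefit": 8},
    {"name": "unpatched OS on legacy VM", "effort": 3, "benefit": 9},
    {"name": "rename confusing configs", "effort": 2, "benefit": 3},
]

def priority(item):
    # Higher benefit and lower effort both push an item up the list.
    return item["benefit"] - item["effort"]

plan = sorted(debt, key=priority, reverse=True)
for item in plan:
    print(item["name"])
```

The full 2x2 view adds one nuance the sort flattens out: hard-but-valuable items still need planning rather than deferral.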

&lt;p&gt;&lt;strong&gt;2. Next, we need to decide what to do&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once we know what technical debt we have, we now need to decide how to deal with it. There are many options to take.&lt;/p&gt;

&lt;p&gt;It may ultimately be best to do nothing. For debt that is assessed to be “small” or to carry a “low interest rate,” it may be optimal to just leave it – likewise if there is a significant “prepayment penalty” for paying it off early.&lt;br&gt;
There can also be strategic advantages. Being one version behind and staying there is usually fine, and sometimes has the advantage of letting the kinks get worked out on someone else’s money.&lt;/p&gt;

&lt;p&gt;Paying back or reducing technical debt will involve replacing systems and taking the cost hit. This can either be done immediately, or over time through a process of gradual improvements. As with financial debt, there are creative ways in which you can “refinance” technical debt, with outsourcing the maintenance being one such way.&lt;/p&gt;

&lt;p&gt;A good example is cloud-based software and hardware services – This brings in a comparison to the popularity of lease-based finance. Using cloud services is also an effective tool for reducing technical debt, both in removing CAPEX requirements and shifting the development focus onto the cloud provider.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Next, we need to create a payment plan&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The main thing to remember here is to not get overwhelmed by the cost of reducing our technical debt and don’t try to pay it off all at once. This would be an ambitious exercise that could overwhelm an organization of any size or balance sheet.&lt;/p&gt;

&lt;p&gt;Again, going back to the financial comparisons, we need to have a mentality of paying off the credit card with the highest interest rate first. This simply means attacking high value/low effort activities first.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;&lt;u&gt;Managing Technical Debt Going Forward&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once we’ve established our baseline and plan of attack, we are going to want to both preserve that visibility and prevent new debt from creeping in. Think of the exercise as a fresh start and a chance to implement best practices to prevent issues from ever escalating again in the future.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1.&lt;/strong&gt; Establish best practices in coding and documentation, peer reviews, automated testing etc. Automate processes to perform regression tests / source code analysis. Leverage off the shelf tools like GitHub and SonarQube.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.&lt;/strong&gt; Use tracking tools to collect metrics – e.g. defect rate, time spent refactoring, and feature development over time – all of which will drive prioritisation decisions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.&lt;/strong&gt; Re-Train Our Underwriters - In simple terms, change management’s job is to ensure that new changes to the company’s technology system don’t impact other systems. They do this by ensuring that the new system complies with standardized methods and procedures. We should be using this process to prevent new debt from being introduced (or at least to identify it) – for example, by standing up an Architecture Review Board.&lt;/p&gt;

&lt;p&gt;Hope this overview of how we plan to tackle technical debt will help you in your journey as an Engineering Manager / Developer 😇&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt; 😸&lt;/p&gt;

&lt;p&gt;I regularly post useful content related to Azure, DevOps and Engineering on Twitter. You should consider following me on &lt;a href="https://twitter.com/GhostInTheWire5" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>technicaldebt</category>
      <category>leadership</category>
      <category>devrel</category>
      <category>devops</category>
    </item>
    <item>
      <title>Would you like an OpenSource Engineering Handbook Template ? 🚀</title>
      <dc:creator>Andrew</dc:creator>
      <pubDate>Wed, 16 Feb 2022 16:42:24 +0000</pubDate>
      <link>https://dev.to/ghostinthewire5/would-you-like-an-opensource-engineering-handbook-template--176k</link>
      <guid>https://dev.to/ghostinthewire5/would-you-like-an-opensource-engineering-handbook-template--176k</guid>
      <description>&lt;p&gt;Launching 🚀&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔥 - An OpenSource Engineering Handbook Template for Engineering Teams who want to go further, faster 🚀&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;&lt;a href="https://github.com/ghostinthewires/Engineering-Handbook" rel="noopener noreferrer"&gt;Github Repo Link&lt;/a&gt;&lt;/u&gt;&lt;/strong&gt; ⭐&lt;/p&gt;

&lt;p&gt;Creating an Engineering Handbook Website from scratch is time-consuming and that's why I have created an OpenSource Engineering Handbook Website Template so Engineering Managers don't have to build their website from scratch. 💯&lt;/p&gt;

&lt;p&gt;Instead, Engineering Managers can focus on building better Engineering Teams without worrying about the Engineering Handbook Website itself. 🤘&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Easy to Setup ✅&lt;/li&gt;
&lt;li&gt;Free to Use ( OpenSource ) ✅&lt;/li&gt;
&lt;li&gt;Fully Responsive ✅&lt;/li&gt;
&lt;li&gt;Super Fast and Optimized for SEO ✅&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftqt827j5u0x34t5wk4x2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftqt827j5u0x34t5wk4x2.png" alt="Lighthouse" width="530" height="146"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The project is made with &lt;strong&gt;HTML&lt;/strong&gt;, &lt;strong&gt;CSS&lt;/strong&gt;, some &lt;strong&gt;JavaScript&lt;/strong&gt;, and &lt;strong&gt;Jekyll&lt;/strong&gt;. Don't worry if you don't know any of these; I have provided instructions on how to use the template and set up your own Engineering Handbook in the &lt;strong&gt;README.md&lt;/strong&gt; file inside the Github Repository.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check out the &lt;a href="https://github.com/ghostinthewires/Engineering-Handbook" rel="noopener noreferrer"&gt;Github Repository&lt;/a&gt; 👨‍💻&lt;/li&gt;
&lt;li&gt;Drop a Github Star ⭐ 😉&lt;/li&gt;
&lt;li&gt;Fork the Repository 🍴&lt;/li&gt;
&lt;li&gt;Start using it for your own Engineering Handbook 🙌&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;strong&gt;&lt;a href="https://ghostinthewires.github.io/Engineering-Handbook/" rel="noopener noreferrer"&gt;Demo Link&lt;/a&gt;&lt;/strong&gt; of the template shows you what it will look like ✅&lt;/p&gt;

&lt;p&gt;Hope this Engineering Handbook Template will help you in your journey as an Engineering Manager / Developer 😇&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt; 😸&lt;/p&gt;

&lt;p&gt;I regularly post useful content related to Azure, DevOps and Engineering on Twitter. You should consider following me on &lt;a href="https://twitter.com/GhostInTheWire5" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>githunt</category>
      <category>devops</category>
      <category>devrel</category>
    </item>
    <item>
      <title>How FirstPort are leveraging the power of Open Source to drive continuous code quality</title>
      <dc:creator>Andrew</dc:creator>
      <pubDate>Mon, 23 Aug 2021 14:47:59 +0000</pubDate>
      <link>https://dev.to/ghostinthewire5/how-firstport-are-leveraging-the-power-of-open-source-to-drive-continuous-code-quality-fd0</link>
      <guid>https://dev.to/ghostinthewire5/how-firstport-are-leveraging-the-power-of-open-source-to-drive-continuous-code-quality-fd0</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1fp4jhzbjv26f2mjli7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1fp4jhzbjv26f2mjli7.png" alt="SonarQube" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;As many of you will already know, I am Head of Technology at &lt;a href="https://www.firstport.co.uk/" rel="noopener noreferrer"&gt;FirstPort&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A key part of my role is delivering FirstPort’s vision of ‘People First’ technology. To do this, it is imperative that I select the right technology to underpin the delivery of services that help make customers’ lives easier.&lt;/p&gt;

&lt;p&gt;Today I want to talk about my selection of &lt;a href="https://www.sonarqube.org/" rel="noopener noreferrer"&gt;SonarQube&lt;/a&gt; (and a whole host of other cool tools &amp;amp; services such as GitHub Actions, Terraform, Caddy, Let's Encrypt, Docker &amp;amp; more)&lt;/p&gt;

&lt;p&gt;SonarQube is the leading tool for continuously inspecting the Code Quality and Security of your codebases and guiding development teams during Code Reviews.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building our Container Image
&lt;/h2&gt;

&lt;p&gt;I did not want to use &lt;a href="https://www.sonarqube.org/" rel="noopener noreferrer"&gt;SonarCloud&lt;/a&gt;, nor did I want to host this on a VM, so I decided on ACI (Azure Container Instances). &lt;/p&gt;

&lt;p&gt;However, when trying to use ACI with an external database I found that any version of SonarQube after 7.7 throws an error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I found this was because SonarQube uses an embedded &lt;a href="https://www.elastic.co/elasticsearch/" rel="noopener noreferrer"&gt;Elasticsearch&lt;/a&gt;, therefore you need to ensure that your Docker host configuration complies with the &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#_set_vm_max_map_count_to_at_least_262144" rel="noopener noreferrer"&gt;Elasticsearch production mode requirements&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the requirements above suggest, fixing this would mean changing the host OS settings to increase the max_map_count; on a Linux OS this means updating the /etc/sysctl.conf file with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vm.max_map_count=262144
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The problem with ACI is that there is no access to the host, so how can the latest SonarQube (9.0.1 at the time of writing) be run in ACI if this setting cannot be changed?&lt;/p&gt;

&lt;p&gt;In this blog I am going to detail the way we run SonarQube in Azure Container Instances with an external Azure SQL database.&lt;/p&gt;

&lt;p&gt;Here at FirstPort, we also use &lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; to build the Azure infrastructure and of course GitHub Actions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;The first thing is to address the max_map_count issue, for this we need a sonar.properties file that contains the following setting:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sonar.search.javaAdditionalOpts=-Dnode.store.allow_mmap=false
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This setting provides the ability to disable memory mapping in Elasticsearch, which is needed when running SonarQube inside containers where you cannot change the host’s vm.max_map_count (see the &lt;a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.9/index-modules-store.html#allow-mmap" rel="noopener noreferrer"&gt;Elasticsearch documentation&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Now we have our sonar.properties file we need to create a custom container so we can add that into the setup. A small dockerfile can achieve this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM sonarqube:9.0.1-community
COPY sonar.properties /opt/sonarqube/conf/sonar.properties
RUN chown sonarqube:sonarqube /opt/sonarqube/conf/sonar.properties
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This dockerfile is now ready to be built using Docker and pushed to an ACR (Azure Container Registry). &lt;/p&gt;

&lt;p&gt;For more info on how to build a container and/or push to an ACR then have a look at the &lt;a href="https://docs.docker.com/engine/reference/commandline/build/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; and &lt;a href="https://docs.microsoft.com/en-us/azure/container-registry/container-registry-get-started-docker-cli?tabs=azure-cli" rel="noopener noreferrer"&gt;Microsoft&lt;/a&gt; documentation which have easy to follow instructions.&lt;/p&gt;

&lt;p&gt;We first build the ACR using Terraform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_container_registry" "acr" {
  name                = join("", [var.product, "acr", var.location, var.environment])
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  admin_enabled       = true
  sku                 = "Standard"

  tags = local.tags
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and then use our standard workflow for running Terraform in &lt;a href="https://dev.to/ghostinthewire5/how-firstport-manage-github-using-code-stored-in-github-41f6#github-actions"&gt;GitHub Actions&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next we use GitHub Actions to build &amp;amp; push our container image to the ACR setup above. Note the &lt;code&gt;cd container&lt;/code&gt; line. This is because we have our dockerfile and sonar.properties files in a folder called container (the sonar folder contains all the Terraform files for the rest of the infrastructure):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxufscn5gpiie9203lgu7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxufscn5gpiie9203lgu7.png" alt="repo" width="324" height="286"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Build Container Image &amp;amp; Push to ACR
on:
  workflow_dispatch:

jobs:
  build:
    name: Build Container &amp;amp; Push to ACR
    runs-on: ubuntu-latest

    steps:

      - name: Checkout
        uses: actions/checkout@master

      - name: ACR build
        uses: azure/docker-login@v1
        with:
          login-server: acrname.azurecr.io
          username: acrusername
          password: ${{ secrets.REGISTRY_PASSWORD }}

      - run: |
          cd container &amp;amp;&amp;amp; docker build . -t acrname.azurecr.io/acrrepo:${{ github.sha }}
          docker push acrname.azurecr.io/acrrepo:${{ github.sha }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Building the SonarQube Infrastructure
&lt;/h2&gt;

&lt;p&gt;So now that we have a container image uploaded to ACR we can look at the rest of the configuration.&lt;/p&gt;

&lt;p&gt;There are a number of parts to create:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;File shares&lt;/li&gt;
&lt;li&gt;External Database&lt;/li&gt;
&lt;li&gt;Container Group

&lt;ul&gt;
&lt;li&gt;SonarQube&lt;/li&gt;
&lt;li&gt;Reverse Proxy&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;At FirstPort our default is to use IaC (Infrastructure as Code), so I will show you how I use Terraform to configure the SonarQube infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  File Shares
&lt;/h2&gt;

&lt;p&gt;The SonarQube &lt;a href="https://docs.sonarqube.org/latest/setup/install-server/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; mentions setting up volume mounts for data, extensions and logs, for this I use an Azure Storage Account and Shares.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_storage_account" "storage" {
  name                     = join("", [var.product, "strg", var.location, var.environment])
  location                 = azurerm_resource_group.rg.location
  resource_group_name      = azurerm_resource_group.rg.name
  account_tier             = "Standard"
  account_replication_type = "RAGZRS"
  min_tls_version          = "TLS1_2"
  tags                     = local.tags
}

resource "azurerm_storage_share" "data-share" {
  name                 = "data"
  storage_account_name = azurerm_storage_account.storage.name
  quota                = 50
}

resource "azurerm_storage_share" "extensions-share" {
  name                 = "extensions"
  storage_account_name = azurerm_storage_account.storage.name
  quota                = 50
}

resource "azurerm_storage_share" "logs-share" {
  name                 = "logs"
  storage_account_name = azurerm_storage_account.storage.name
  quota                = 50
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  External Database
&lt;/h2&gt;

&lt;p&gt;For the external database I am using Azure SQL Server with a SQL Database, and I set up a firewall rule to allow Azure services to access the database.&lt;/p&gt;

&lt;p&gt;By using the random_password resource to create the SQL password, no secrets are stored in the configuration, and nobody needs to know the password as long as the SonarQube server does.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_mssql_server" "sql_server" {
  name                         = join("", [var.product, "sql", var.location, var.environment])
  location                     = azurerm_resource_group.rg.location
  resource_group_name          = azurerm_resource_group.rg.name
  version                      = "12.0"
  administrator_login          = "sonaradmin"
  administrator_login_password = random_password.sql_admin_password.result
  minimum_tls_version          = "1.2"

  identity {
    type = "SystemAssigned"
  }

  tags = local.tags
}

resource "azurerm_mssql_server_transparent_data_encryption" "sql_tde" {
  server_id = azurerm_mssql_server.sql_server.id
}

resource "azurerm_mssql_firewall_rule" "sql_firewall_azure" {
  name             = "AllowAccessToAzure"
  server_id        = azurerm_mssql_server.sql_server.id
  # 0.0.0.0 - 0.0.0.0 is the special range meaning "Azure services only"
  start_ip_address = "0.0.0.0"
  end_ip_address   = "0.0.0.0"
}

resource "azurerm_mssql_database" "sonar" {
  name      = "sonar"
  server_id = azurerm_mssql_server.sql_server.id
  collation = "SQL_Latin1_General_CP1_CS_AS"
  sku_name  = "S2"

  tags = local.tags
}

resource "random_password" "sql_admin_password" {
  length           = 32
  special          = true
  override_special = "/@\" "
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Container Group
&lt;/h2&gt;

&lt;p&gt;Setting up the container group requires credentials to access the Azure Container Registry that holds the custom SonarQube container. Using a data resource allows retrieval of these details without passing them in as variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "azurerm_container_registry" "registry" {
  name                = "acrname"
  resource_group_name = "acr-rg-name"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For this setup we are going to have two containers - the custom SonarQube container and a Caddy container. &lt;a href="https://caddyserver.com/" rel="noopener noreferrer"&gt;Caddy&lt;/a&gt; works well as a reverse proxy: it is small and lightweight, and it manages certificates automatically with &lt;a href="https://letsencrypt.org/" rel="noopener noreferrer"&gt;Let’s Encrypt&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The SonarQube container configuration connects to the SQL Database and mounts the Azure Storage Account Shares configured earlier.&lt;/p&gt;

&lt;p&gt;The Caddy container configuration sets up the reverse proxy to the SonarQube instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "azurerm_container_group" "container" {
  name                = "containergroupname"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  ip_address_type     = "public"
  dns_name_label      = "acrdnslabel"
  os_type             = "Linux"
  restart_policy      = "OnFailure"
  tags                = local.tags

  image_registry_credential {
    server   = data.azurerm_container_registry.registry.login_server
    username = data.azurerm_container_registry.registry.admin_username
    password = data.azurerm_container_registry.registry.admin_password
  }

  container {
    name   = "sonarqube-server"
    image  = "${data.azurerm_container_registry.registry.login_server}/acrrepo:latest"
    cpu    = "2"
    memory = "4"
    environment_variables = {
      WEBSITES_CONTAINER_START_TIME_LIMIT = 400
    }
    secure_environment_variables = {
      SONARQUBE_JDBC_URL      = "jdbc:sqlserver://${azurerm_mssql_server.sql_server.name}.database.windows.net:1433;database=${azurerm_mssql_database.sonar.name};user=${azurerm_mssql_server.sql_server.administrator_login}@${azurerm_mssql_server.sql_server.name};password=${azurerm_mssql_server.sql_server.administrator_login_password};encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;"
      SONARQUBE_JDBC_USERNAME = azurerm_mssql_server.sql_server.administrator_login
      SONARQUBE_JDBC_PASSWORD = random_password.sql_admin_password.result
    }

    ports {
      port     = 9000
      protocol = "TCP"
    }

    volume {
      name                 = "data"
      mount_path           = "/opt/sonarqube/data"
      share_name           = "data"
      storage_account_name = azurerm_storage_account.storage.name
      storage_account_key  = azurerm_storage_account.storage.primary_access_key
    }

    volume {
      name                 = "extensions"
      mount_path           = "/opt/sonarqube/extensions"
      share_name           = "extensions"
      storage_account_name = azurerm_storage_account.storage.name
      storage_account_key  = azurerm_storage_account.storage.primary_access_key
    }

    volume {
      name                 = "logs"
      mount_path           = "/opt/sonarqube/logs"
      share_name           = "logs"
      storage_account_name = azurerm_storage_account.storage.name
      storage_account_key  = azurerm_storage_account.storage.primary_access_key
    }
  }

  container {
    name     = "caddy-ssl-server"
    image    = "caddy:latest"
    cpu      = "1"
    memory   = "1"
    commands = ["caddy", "reverse-proxy", "--from", "acrrepo.azurecontainer.io", "--to", "localhost:9000"]

    ports {
      port     = 443
      protocol = "TCP"
    }

    ports {
      port     = 80
      protocol = "TCP"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Final Configuration
&lt;/h2&gt;

&lt;p&gt;Follow the SonarQube &lt;a href="https://docs.sonarqube.org/latest/analysis/github-integration/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; for your specific source control, then add the required steps to your application code’s GitHub Actions workflows. You will then see something like this within the SonarQube dashboard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkq7zku00ri66jwlxer3i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkq7zku00ri66jwlxer3i.png" alt="SonarQube" width="800" height="598"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;Once the container instance is running you probably do not want it running 24/7, so using an Azure Function or Logic App to stop and start the instance when it’s not needed will definitely save money. I plan to run an Azure Logic App to start the container at 08:00 and stop it at 18:00, Monday to Friday (I can feel another blog post coming on!)&lt;/p&gt;
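
&lt;p&gt;The stop and start operations themselves are only two CLI calls; whichever scheduler you choose would run something like this (a sketch using the Azure CLI - the container group name is the placeholder from earlier and the resource group name is hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Stop the container group out of hours
az container stop --name containergroupname --resource-group sonarqube-rg-name

# Start it again at the beginning of the working day
az container start --name containergroupname --resource-group sonarqube-rg-name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;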

&lt;p&gt;I hope I could help you learn something new today, and share how we do things here at &lt;a href="https://www.firstport.co.uk/" rel="noopener noreferrer"&gt;FirstPort&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Any questions, get in touch on &lt;a href="https://twitter.com/GhostInTheWire5" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1190197872404377600-946" src="https://platform.twitter.com/embed/Tweet.html?id=1190197872404377600"&gt;
&lt;/iframe&gt;




&lt;/p&gt;

</description>
      <category>azure</category>
      <category>github</category>
      <category>devops</category>
      <category>terraform</category>
    </item>
    <item>
      <title>How FirstPort execute a Database as Code Strategy using DbUp, Terraform &amp; GitHub Actions</title>
      <dc:creator>Andrew</dc:creator>
      <pubDate>Tue, 20 Jul 2021 12:32:17 +0000</pubDate>
      <link>https://dev.to/ghostinthewire5/how-firstport-execute-a-database-as-code-strategy-using-dbup-terraform-github-actions-2b58</link>
      <guid>https://dev.to/ghostinthewire5/how-firstport-execute-a-database-as-code-strategy-using-dbup-terraform-github-actions-2b58</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb4hi3v2yw574iby71pjb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb4hi3v2yw574iby71pjb.png" alt="FirstPort GitHub" width="800" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Introduction

&lt;ul&gt;
&lt;li&gt;Goals&lt;/li&gt;
&lt;li&gt;DbUp&lt;/li&gt;
&lt;li&gt;HTML report&lt;/li&gt;
&lt;li&gt;Create the DbUp console application&lt;/li&gt;
&lt;li&gt;Program.cs file&lt;/li&gt;
&lt;li&gt;GitHub Actions Configuration&lt;/li&gt;
&lt;li&gt;Summary&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;As many of you will already know, I am Head of Technology at &lt;a href="https://www.firstport.co.uk/" rel="noopener noreferrer"&gt;FirstPort&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A key part of my role is delivering FirstPort’s vision of ‘People First’ technology. To do this, it is imperative that I select the right technology to underpin the delivery of services that help make customers’ lives easier.&lt;/p&gt;

&lt;p&gt;Today I want to talk about my selection of &lt;a href="https://dbup.readthedocs.io/en/latest/" rel="noopener noreferrer"&gt;DbUp&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;DbUp is an open-source .NET library that helps you to deploy changes to SQL Server databases. It tracks which SQL scripts have been run already, and runs the change scripts that are needed to get your database up to date.&lt;/p&gt;

&lt;p&gt;You could ask the question why &lt;a href="https://dbup.readthedocs.io/en/latest/" rel="noopener noreferrer"&gt;DbUp&lt;/a&gt; and not &lt;a href="https://docs.microsoft.com/en-us/ef/core/managing-schemas/migrations/?tabs=dotnet-core-cli" rel="noopener noreferrer"&gt;EF Migrations&lt;/a&gt;? &lt;/p&gt;

&lt;p&gt;I have used a lot of these tools and I have to say that DbUp seems to me the purest solution. I don’t like C# "wrappers" generating SQL for me; DDL is an easy language, and I don’t think we need a special tool to generate it.&lt;/p&gt;

&lt;p&gt;Here at FirstPort, we also use &lt;a href="https://www.terraform.io/" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; to build the Azure SQL infrastructure and of course GitHub Actions. The focus of today however will mainly be on DbUp and the GitHub Action workflows. &lt;/p&gt;

&lt;h2&gt;
  
  
  Goals
&lt;/h2&gt;

&lt;p&gt;There are a number of goals we are aiming for:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We want it to be simple&lt;/li&gt;
&lt;li&gt;We want it to be repeatable&lt;/li&gt;
&lt;li&gt;We want to use the same process for dev, QA and production deployments of our changes&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  DbUp
&lt;/h2&gt;

&lt;p&gt;At its core, DbUp is a script runner. Changes made to the database are done via a script:&lt;/p&gt;

&lt;p&gt;Script001_AddTableX.sql&lt;br&gt;
Script002_AddColumnFirstPortIdToTableX.sql&lt;br&gt;
Script003_AddColumnCustomerIdToTableX.sql&lt;/p&gt;

&lt;p&gt;DbUp runs through a console application you write yourself, so you control which options to use, and you don’t need a lot of code.&lt;/p&gt;

&lt;p&gt;You bundle those scripts and tell DbUp to run them. It compares that list against a list stored in the destination database. Any scripts not in the destination database’s list will be run. The scripts are executed in alphabetical order, and the results of each script are displayed on the console. Very simple to implement and understand.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Checking whether journal table exists..
Journal table does not exist
Is upgrade required: True
Beginning database upgrade
Checking whether journal table exists..
Journal table does not exist
Executing Database Server script 'DbUpLeaseExtract.BeforeDeploymentScripts.001_CreateLeaseExtractSchemaIfNotExists.sql'
Checking whether journal table exists..
Creating the [SchemaVersions] table
The [SchemaVersions] table has been created
Upgrade successful
Success!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
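
&lt;p&gt;Under the hood that journal is just a table (dbo.SchemaVersions by default), so you can always check what has been applied with a simple query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Lists every script DbUp has recorded as applied, newest first
SELECT ScriptName, Applied
FROM dbo.SchemaVersions
ORDER BY Applied DESC;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;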



&lt;p&gt;That works great when you’re deploying to a development or test environment. At FirstPort, however, we prefer our DBAs to approve scripts before they go to production, and perhaps to a staging or pre-production environment as well. This approval process is essential, especially when you first start deploying databases.&lt;/p&gt;
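
&lt;p&gt;One way to enforce that approval (a sketch, not necessarily the exact FirstPort setup) is a GitHub Actions environment with required reviewers, configured in the repository settings; the db-upgrade-prod.ps1 script below is a hypothetical stand-in for the production deployment step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  db-upgrade-prod:
    runs-on: ubuntu-latest
    # The job pauses here until a required reviewer (e.g. a DBA) approves
    environment: production
    steps:
      - name: Deploy DB Upgrade
        run: ./db-deploy/scripts/db-upgrade-prod.ps1
        shell: pwsh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;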

&lt;h2&gt;
  
  
  HTML report
&lt;/h2&gt;

&lt;p&gt;Migration scripts are a double-edged sword. You have total control, which gives you great power. However, it’s also easy to mess up. It all depends on the type of change being made and the SQL skills of the writer. The DBAs’ trust in the process will be low when inexperienced C# developers are writing these migration scripts.&lt;/p&gt;

&lt;p&gt;At FirstPort we added some code to generate an HTML report. It is an extension method where you give it the path of the report you want to generate. That means this section goes from:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                var result = upgrader.PerformUpgrade();

                // Display the result
                if (result.Successful)
                {
                    Console.ForegroundColor = ConsoleColor.Green;
                    Console.WriteLine("Success!");
                }
                else
                {
                    Console.ForegroundColor = ConsoleColor.Red;
                    Console.WriteLine(result.Error);
                    Console.WriteLine("Failed!");
                }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;            if (args.Any(a =&amp;gt; a.StartsWith("--PreviewReportPath", StringComparison.InvariantCultureIgnoreCase)))
            {
                // Generate a preview file so GitHub Actions can generate an artifact for approvals
                var report = args.FirstOrDefault(x =&amp;gt; x.StartsWith("--PreviewReportPath", StringComparison.OrdinalIgnoreCase));
                report = report.Substring(report.IndexOf("=") + 1).Replace(@"""", string.Empty);

                var fullReportPath = Path.Combine(report, "UpgradeReport.html");

                Console.WriteLine($"Generating the report at {fullReportPath}");

                upgrader.GenerateUpgradeHtmlReport(fullReportPath);
            }
            else
            {
                var result = upgrader.PerformUpgrade();

                // Display the result
                if (result.Successful)
                {
                    Console.ForegroundColor = ConsoleColor.Green;
                    Console.WriteLine("Success!");
                }
                else
                {
                    Console.ForegroundColor = ConsoleColor.Red;
                    Console.WriteLine(result.Error);
                    Console.WriteLine("Failed!");
                }
            }
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code will generate a report containing all the scripts that will be run.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvj7elioj8d6wxf16r3qh.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvj7elioj8d6wxf16r3qh.PNG" alt="DbUp Delta Report" width="721" height="693"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the DbUp console application
&lt;/h2&gt;

&lt;p&gt;With the above features in mind, we can put together a .NET Core DbUp console application to deploy to Azure SQL, and then a process in GitHub Actions to run that console application.&lt;/p&gt;

&lt;p&gt;I chose .NET Core over .NET Framework because it can be built and run anywhere. DbUp is a .NET Standard library, so it will work just as well in a .NET Framework application.&lt;/p&gt;

&lt;p&gt;Let’s fire up our IDE of choice and create a .NET Core console application. I am using &lt;a href="https://code.visualstudio.com/" rel="noopener noreferrer"&gt;VSCode&lt;/a&gt; to build this console application. I prefer it over full-blown Visual Studio.&lt;/p&gt;
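
&lt;p&gt;If you are following along, the project and the DbUp package can be created from the terminal (the project name here simply matches the namespace used later in this post):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dotnet new console --name DbUpLeaseExtract
cd DbUpLeaseExtract
dotnet add package dbup-sqlserver --version 4.5.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;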

&lt;p&gt;The console application needs some scripts to deploy. I’m going to add three folders and populate them with some script files:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dogojzy6jp5a07l4yal.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dogojzy6jp5a07l4yal.PNG" alt="VSCode" width="357" height="135"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I recommend you add a zero-padded prefix, such as 001, 002, etc., to the start of your script file names. DbUp runs the scripts in alphabetical order, and that prefix helps ensure the scripts are run in the correct order (without the zero-padding, Script10 would sort before Script2).&lt;/p&gt;

&lt;p&gt;By default, .NET will not include those script files when the console application is built, and we want to include them as embedded resources. Thankfully, we can easily add a reference to those files by including this code in the .csproj file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    &amp;lt;ItemGroup&amp;gt;
        &amp;lt;EmbeddedResource Include="BeforeDeploymentScripts\*.sql" /&amp;gt;
        &amp;lt;EmbeddedResource Include="DeploymentScripts\*.sql" /&amp;gt;
        &amp;lt;EmbeddedResource Include="PostDeploymentScripts\*.sql" /&amp;gt;
    &amp;lt;/ItemGroup&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The entire file looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;Project Sdk="Microsoft.NET.Sdk"&amp;gt;

    &amp;lt;PropertyGroup&amp;gt;
        &amp;lt;OutputType&amp;gt;Exe&amp;lt;/OutputType&amp;gt;
        &amp;lt;TargetFramework&amp;gt;net5.0&amp;lt;/TargetFramework&amp;gt;
    &amp;lt;/PropertyGroup&amp;gt;

    &amp;lt;ItemGroup&amp;gt;
        &amp;lt;EmbeddedResource Include="BeforeDeploymentScripts\*.sql" /&amp;gt;
        &amp;lt;EmbeddedResource Include="DeploymentScripts\*.sql" /&amp;gt;
        &amp;lt;EmbeddedResource Include="PostDeploymentScripts\*.sql" /&amp;gt;
    &amp;lt;/ItemGroup&amp;gt;

    &amp;lt;ItemGroup&amp;gt;
      &amp;lt;PackageReference Include="dbup-sqlserver" Version="4.5.0" /&amp;gt;
    &amp;lt;/ItemGroup&amp;gt;

&amp;lt;/Project&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Program.cs file
&lt;/h2&gt;

&lt;p&gt;The final step to get this application going is to add in the necessary code in the Program.cs to call DbUp. The application accepts parameters from the command-line, and GitHub Actions will be configured to send in the following parameters:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ConnectionString&lt;/strong&gt;: For this demo, we are sending this as a parameter instead of storing it in the config file.&lt;br&gt;
&lt;strong&gt;PreviewReportPath&lt;/strong&gt;: The full path to save the preview report. This parameter is optional: when it is sent in, we generate a preview HTML report for GitHub Actions to upload to Azure Blob storage; when it is not, the code performs the actual deployment.&lt;br&gt;
Let’s start by pulling the connection string from the command-line argument:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;         static void Main(string[] args)
        {
            var connectionString = args.FirstOrDefault(x =&amp;gt; x.StartsWith("--ConnectionString", StringComparison.OrdinalIgnoreCase));

            connectionString = connectionString.Substring(connectionString.IndexOf("=") + 1).Replace(@"""", string.Empty);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;DbUp uses a fluent API. We need to tell it about our folders, the type of script each folder contains, and the order we want to run the scripts in. If you use the Scripts Embedded In Assembly option with a StartsWith search, you need to supply the full namespace in your search.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;            var upgradeEngineBuilder = DeployChanges.To
                .SqlDatabase(connectionString, null)
                .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly(), x =&amp;gt; x.StartsWith("DbUpLeaseExtract.BeforeDeploymentScripts."), new SqlScriptOptions { ScriptType = ScriptType.RunAlways, RunGroupOrder = 0 })
                .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly(), x =&amp;gt; x.StartsWith("DbUpLeaseExtract.DeploymentScripts"), new SqlScriptOptions { ScriptType = ScriptType.RunOnce, RunGroupOrder = 1 })
                .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly(), x =&amp;gt; x.StartsWith("DbUpLeaseExtract.PostDeploymentScripts."), new SqlScriptOptions { ScriptType = ScriptType.RunAlways, RunGroupOrder = 2 })
                .WithTransactionPerScript()
                .LogToConsole();

            var upgrader = upgradeEngineBuilder.Build();

            Console.WriteLine("Is upgrade required: " + upgrader.IsUpgradeRequired());
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The upgrader has been built, and it is ready to run. This section is where we inject the check for the upgrade report parameter. If that parameter is set, do not run the upgrade. Instead, generate a report for GitHub Actions to upload to Azure Blob storage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;            if (args.Any(a =&amp;gt; a.StartsWith("--PreviewReportPath", StringComparison.InvariantCultureIgnoreCase)))
            {
                // Generate a preview file so GitHub Actions can generate an artifact for approvals
                var report = args.FirstOrDefault(x =&amp;gt; x.StartsWith("--PreviewReportPath", StringComparison.OrdinalIgnoreCase));
                report = report.Substring(report.IndexOf("=") + 1).Replace(@"""", string.Empty);

                var fullReportPath = Path.Combine(report, "UpgradeReport.html");

                Console.WriteLine($"Generating the report at {fullReportPath}");

                upgrader.GenerateUpgradeHtmlReport(fullReportPath);
            }
            else
            {
                var result = upgrader.PerformUpgrade();

                // Display the result
                if (result.Successful)
                {
                    Console.ForegroundColor = ConsoleColor.Green;
                    Console.WriteLine("Success!");
                }
                else
                {
                    Console.ForegroundColor = ConsoleColor.Red;
                    Console.WriteLine(result.Error);
                    Console.WriteLine("Failed!");
                }
            }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we put it all together, it looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System;
using System.IO;
using System.Linq;
using System.Reflection;
using DbUp;
using DbUp.Engine;
using DbUp.Helpers;
using DbUp.Support;

namespace DbUpLeaseExtract
{
    class Program
    {
        static void Main(string[] args)
        {
            var connectionString = args.FirstOrDefault(x =&amp;gt; x.StartsWith("--ConnectionString", StringComparison.OrdinalIgnoreCase));

            connectionString = connectionString.Substring(connectionString.IndexOf("=") + 1).Replace(@"""", string.Empty);

            var upgradeEngineBuilder = DeployChanges.To
                .SqlDatabase(connectionString, null)
                .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly(), x =&amp;gt; x.StartsWith("DbUpLeaseExtract.BeforeDeploymentScripts."), new SqlScriptOptions { ScriptType = ScriptType.RunAlways, RunGroupOrder = 0 })
                .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly(), x =&amp;gt; x.StartsWith("DbUpLeaseExtract.DeploymentScripts"), new SqlScriptOptions { ScriptType = ScriptType.RunOnce, RunGroupOrder = 1 })
                .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly(), x =&amp;gt; x.StartsWith("DbUpLeaseExtract.PostDeploymentScripts."), new SqlScriptOptions { ScriptType = ScriptType.RunAlways, RunGroupOrder = 2 })
                .WithTransactionPerScript()
                .LogToConsole();

            var upgrader = upgradeEngineBuilder.Build();

            Console.WriteLine("Is upgrade required: " + upgrader.IsUpgradeRequired());

            if (args.Any(a =&amp;gt; a.StartsWith("--PreviewReportPath", StringComparison.InvariantCultureIgnoreCase)))
            {
                // Generate a preview file so GitHub Actions can generate an artifact for approvals
                var report = args.FirstOrDefault(x =&amp;gt; x.StartsWith("--PreviewReportPath", StringComparison.OrdinalIgnoreCase));
                report = report.Substring(report.IndexOf("=") + 1).Replace(@"""", string.Empty);

                var fullReportPath = Path.Combine(report, "UpgradeReport.html");

                Console.WriteLine($"Generating the report at {fullReportPath}");

                upgrader.GenerateUpgradeHtmlReport(fullReportPath);
            }
            else
            {
                var result = upgrader.PerformUpgrade();

                // Display the result
                if (result.Successful)
                {
                    Console.ForegroundColor = ConsoleColor.Green;
                    Console.WriteLine("Success!");
                }
                else
                {
                    Console.ForegroundColor = ConsoleColor.Red;
                    Console.WriteLine(result.Error);
                    Console.WriteLine("Failed!");
                }
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  GitHub Actions Configuration
&lt;/h2&gt;

&lt;p&gt;We have a couple of different workflows. One runs on Pull Request to generate the HTML Report and another on merge to run the actual database scripts against the target Azure SQL instance.&lt;/p&gt;

&lt;p&gt;First we have this to ensure it only runs on pull requests for changes within the db-deploy directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Create DB Delta Report
on:
  pull_request:
    branches:
      - develop
    paths:
      - 'db-deploy/**'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next we check out the code and lint it using &lt;a href="https://github.com/github/super-linter" rel="noopener noreferrer"&gt;super-linter&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  db-delta-report:
    name: db-delta-report
    runs-on: ubuntu-latest

    steps:

      - name: Checkout
        uses: actions/checkout@master

      - name: Lint Code Base
        uses: github/super-linter@master
        env:
          GITHUB_TOKEN: ${{ secrets.GH_TOKEN }}
          VALIDATE_ALL_CODEBASE: true
          VALIDATE_MD: true
          VALIDATE_CSHARP: true
          VALIDATE_SQL: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next we set up .NET Core and build the project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Setup .NET Core
        uses: actions/setup-dotnet@main
        with:
          dotnet-version: '5.0.x'

      - name: Cache Packages
        uses: actions/cache@v2
        with:
          path: ~/.nuget/packages
          key: ${{ runner.os }}-nuget-${{ hashFiles('**/packages.lock.json') }}
          restore-keys: |
            ${{ runner.os }}-nuget

      - name: Restore dependencies 
        working-directory: db-deploy/lease_extract
        run: dotnet restore

      - name: Build Console App
        working-directory: db-deploy/lease_extract
        run: dotnet publish --no-restore --output DbUpLeaseExtract
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This next step calls a PowerShell script that runs the console app. We utilise &lt;a href="https://docs.github.com/en/actions/reference/encrypted-secrets" rel="noopener noreferrer"&gt;encrypted secrets&lt;/a&gt; to pass in the database connection string:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Create DB Delta Report
        env: 
          LEASE_EXTRACT_DB_CONNECTION_STRING_DEV: ${{ secrets.LEASE_EXTRACT_DB_CONNECTION_STRING_DEV }}
        run: ./db-deploy/scripts/db-delta-report-dev.ps1
        shell: pwsh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the PowerShell script it is calling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$packagePath = "db-deploy/lease_extract/DbUpLeaseExtract"
$connectionString = $Env:LEASE_EXTRACT_DB_CONNECTION_STRING_DEV
$reportPath = "db-deploy/lease_extract/DbUpLeaseExtract"
$dllToRun = "$packagePath/DbUpLeaseExtract.dll"
$generatedReport = "$reportPath/UpgradeReport.html"

if (-not (Test-Path $reportPath)) {
    New-Item $reportPath -ItemType "directory"
}

dotnet $dllToRun --ConnectionString="$connectionString" --PreviewReportPath="$reportPath"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Currently the GitHub API has no way of attaching a file to a PR comment, so as a workaround I have decided to upload the HTML report to Azure Blob storage and link to it in the PR comment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Upload DB Delta Report to Azure Blob
        uses: azure/powershell@v1
        with:
          inlineScript: |
            az storage blob upload --account-name storageaccountname --container-name '$web' --file "db-deploy/lease_extract/DbUpLeaseExtract/UpgradeReport.html" --name UpgradeReport.html
          azPSVersion: "latest"

      - name: Comment on PR
        uses: unsplash/comment-on-pr@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          msg: "Please review the [DB Delta Report](https://storageaccountname.z33.web.core.windows.net/)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
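
&lt;p&gt;As an aside, the &lt;code&gt;azure/powershell&lt;/code&gt; action provides the Az PowerShell module, so the same upload could also be sketched with &lt;code&gt;Set-AzStorageBlobContent&lt;/code&gt; rather than the &lt;code&gt;az&lt;/code&gt; CLI. This is an untested alternative (same account and container names as above, and it assumes the session is already authenticated):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build a storage context from the signed-in Azure account
$ctx = New-AzStorageContext -StorageAccountName "storageaccountname" -UseConnectedAccount

# Upload the report to the static website container, overwriting any previous run
Set-AzStorageBlobContent -Context $ctx -Container '$web' `
    -File "db-deploy/lease_extract/DbUpLeaseExtract/UpgradeReport.html" `
    -Blob "UpgradeReport.html" -Force
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;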



&lt;p&gt;Once this has been reviewed by our DBAs and the pull request is merged, a different workflow is triggered:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: DB Upgrade
on:
  push:
    branches:
      - develop
    paths:
      - 'db-deploy/**'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Instead of generating the HTML report, it runs an upgrade on the database by calling a different script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Deploy DB Upgrade
        env: 
          LEASE_EXTRACT_DB_CONNECTION_STRING_DEV: ${{ secrets.LEASE_EXTRACT_DB_CONNECTION_STRING_DEV }}
        run: ./db-deploy/scripts/db-upgrade-dev.ps1
        shell: pwsh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The only difference in this script is that the &lt;strong&gt;PreviewReportPath&lt;/strong&gt; switch is omitted:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$packagePath = "db-deploy/lease_extract/DbUpLeaseExtract"
$connectionString = $Env:LEASE_EXTRACT_DB_CONNECTION_STRING_DEV
$reportPath = "db-deploy/lease_extract/DbUpLeaseExtract"
$dllToRun = "$packagePath/DbUpLeaseExtract.dll"
$generatedReport = "$reportPath/UpgradeReport.html"

if ((test-path $reportPath) -eq $false){
    New-Item $reportPath -ItemType "directory"
}

dotnet $dllToRun --ConnectionString="$connectionString"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
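
&lt;p&gt;The console app itself isn't shown in this post, but with DbUp it only needs a few lines. This is a hypothetical sketch (the argument parsing and names are my own, not the actual FirstPort code) of how the &lt;strong&gt;PreviewReportPath&lt;/strong&gt; switch could toggle between generating the delta report and performing the upgrade:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System.Linq;
using System.Reflection;
using DbUp;

class Program
{
    static int Main(string[] args)
    {
        // Read a "--Name=value" switch from the command line, or null if absent
        string Get(string name) =&amp;gt; args
            .Where(a =&amp;gt; a.StartsWith($"--{name}="))
            .Select(a =&amp;gt; a.Substring(name.Length + 3))
            .FirstOrDefault();

        var connectionString = Get("ConnectionString");
        var previewPath = Get("PreviewReportPath");

        var upgrader = DeployChanges.To
            .SqlDatabase(connectionString)
            .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
            .LogToConsole()
            .Build();

        if (previewPath != null)
        {
            // Report-only mode: write the HTML delta report, change nothing
            upgrader.GenerateUpgradeHtmlReport($"{previewPath}/UpgradeReport.html");
            return 0;
        }

        // Upgrade mode: apply any pending scripts to the database
        var result = upgrader.PerformUpgrade();
        return result.Successful ? 0 : 1;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;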



&lt;p&gt;As a final step, we track all of our deployments in &lt;a href="https://codeclimate.com/velocity/" rel="noopener noreferrer"&gt;Code Climate Velocity&lt;/a&gt; - If you aren't using it yet, I highly recommend you check it out:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Send Deployment to Code Climate
        run: curl -d "token=${{ secrets.VELOCITY_DEPLOYMENT_TOKEN }}" -d "revision=${GITHUB_SHA}" -d "repository_url=${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}" -d "branch=develop" -d "environment=db-dev" -d "version=${GITHUB_RUN_NUMBER}" https://velocity.codeclimate.com/deploys
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This same convention would be run for dev, QA &amp;amp; prod, with branches aligned to each environment.&lt;/p&gt;
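
&lt;p&gt;For example, the prod workflow might differ only in its trigger branch and the secrets it references. A sketch, assuming &lt;code&gt;main&lt;/code&gt; is the prod branch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: DB Upgrade (Prod)
on:
  push:
    branches:
      - main
    paths:
      - 'db-deploy/**'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;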

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;DbUp has really helped FirstPort create a robust deployment pipeline for databases. Now DBAs (and others) can review changes via GitHub Actions before they are deployed. Being able to review changes should build trust in the process and speed up adoption.&lt;/p&gt;

&lt;p&gt;In this post I demonstrated one technique for achieving automated database deployments using GitHub Actions. There are plenty of other solutions, and if you are using Entity Framework or a similar ORM, those tools have migration support built in, but the core approach will be the same.&lt;/p&gt;

&lt;p&gt;I hope I could help you learn something new today, and share how we do things here at &lt;a href="https://www.firstport.co.uk/" rel="noopener noreferrer"&gt;FirstPort&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Any questions, get in touch on &lt;a href="https://twitter.com/GhostInTheWire5" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;&lt;iframe class="tweet-embed" id="tweet-1190197872404377600-86" src="https://platform.twitter.com/embed/Tweet.html?id=1190197872404377600"&gt;
&lt;/iframe&gt;&lt;/p&gt;

</description>
      <category>github</category>
      <category>terraform</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
