Azure Networking from Zero to Enterprise — Part 2: Bicep & Azure Verified Modules
This is Part 2 of the series. In Part 1, we covered VNets, Subnets, NSGs, Route Tables, DNS, and Private Endpoints. Now we're picking up the tool we'll use to deploy all of it: Bicep.
What you'll learn in this part:
- What Bicep is and why it exists
- Bicep vs ARM templates vs Terraform — an honest comparison
- Azure Verified Modules (AVM) — Microsoft's official module library
- Writing and deploying your first VNet with subnets and NSGs in Bicep
- Structuring a Bicep project for a multi-part series
What is Bicep?
Bicep is Azure's domain-specific language (DSL) for deploying Azure resources. It compiles down to ARM templates (JSON), but you never have to write or read that JSON yourself.
If you've ever opened a 500-line ARM template and felt your soul leave your body — Bicep is the fix.
```bicep
// This is Bicep. Clean, readable, no curly-brace hell.
resource vnet 'Microsoft.Network/virtualNetworks@2024-01-01' = {
  name: 'vnet-hub'
  location: 'eastus2'
  properties: {
    addressSpace: {
      addressPrefixes: ['10.10.0.0/16']
    }
  }
}
```
The equivalent ARM JSON is roughly 3x longer. Bicep gives you:
- Type safety — intellisense and compile-time validation in VS Code
- Modules — split your deployment into reusable pieces
- No state file — unlike Terraform, Bicep is stateless. Azure IS the state.
- Day-zero support — new Azure features are available in Bicep immediately. Terraform providers can lag weeks or months.
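The modules feature is worth a quick look up front: you split a deployment into files and call them like functions. A minimal sketch (the file path and parameter here are illustrative; we'll build real module files like this later in the series):

```bicep
// main.bicep — calling a hypothetical local module file
module hubNetwork './modules/hub.bicep' = {
  name: 'deploy-hub-network' // the nested deployment name shown in the portal
  params: {
    location: 'eastus2'      // hub.bicep would declare: param location string
  }
}
```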
Installing Bicep
Bicep ships with the Azure CLI. If you have az installed:
```shell
az bicep install
az bicep upgrade
az bicep version
# Should show v0.30+ (as of 2026)
```
Install the Bicep VS Code extension — it gives you autocomplete, error highlighting, and resource snippet generation. It's essential.
Bicep vs Terraform — An Honest Take
This is a hot topic, so let me be direct:
| Factor | Bicep | Terraform |
|---|---|---|
| Multi-cloud | Azure only | Multi-cloud |
| State management | None (Azure is the state) | State file required (local, S3, blob, etc.) |
| New Azure feature support | Immediate (same day as ARM) | Delayed (depends on provider release) |
| Learning curve | Lower (if you know Azure) | Moderate |
| Module ecosystem | AVM (growing fast) | Terraform Registry (massive) |
| Drift detection | What-if deployments | terraform plan |
| Language | Declarative DSL | HCL (declarative DSL) |
| Tooling | VS Code extension, Azure CLI | VS Code extension, Terraform CLI |
When to use Bicep:
- You're Azure-only (most enterprises with Microsoft EA agreements)
- You want no state file headaches (no backend config, no state locking, no state corruption)
- You need day-zero support for new Azure features
- Your team already knows Azure and ARM concepts
When to use Terraform:
- You deploy to multiple clouds (AWS + Azure + GCP)
- You need to manage non-Azure resources (GitHub repos, Datadog monitors, PagerDuty, etc.)
- Your team already knows HCL
My take:
For Azure-native networking (which is what this series is about), Bicep wins. No state file means fewer things to break, and you get new networking features the same day Microsoft releases them. I've seen Terraform's AzureRM provider lag behind on features like Private Endpoint subnet-level policies, DNS Private Resolver, and Virtual Network Manager — features we'll use in this series.
Azure Verified Modules (AVM)
Here's where it gets interesting. Microsoft maintains an official library of production-ready Bicep modules called Azure Verified Modules.
Why AVM matters:
Without AVM, deploying a VNet with subnets, NSGs, and route tables means writing 200+ lines of Bicep. With AVM, it's ~30 lines.
AVM modules are:
- Microsoft-maintained and regularly updated
- Well-tested with CI/CD pipelines
- Best-practice by default — they support diagnostic settings, locks, RBAC, and tags as built-in optional parameters
- Published to the Bicep public registry — you reference them with a one-liner
Using an AVM module:
```bicep
// Reference the AVM Virtual Network module from the public registry
module vnet 'br/public:avm/res/network/virtual-network:0.5.2' = {
  name: 'deploy-vnet-hub'
  params: {
    name: 'vnet-hub'
    location: 'eastus2'
    addressPrefixes: ['10.10.0.0/16']
    subnets: [
      {
        name: 'AzureFirewallSubnet'
        addressPrefix: '10.10.1.0/26'
      }
      {
        name: 'AzureBastionSubnet'
        addressPrefix: '10.10.2.0/26'
      }
      {
        name: 'snet-shared-services'
        addressPrefix: '10.10.4.0/24'
      }
    ]
  }
}
```
That's it. The module handles the rest — it creates the VNet and its subnets, and exposes optional parameters for diagnostic settings, peering, custom DNS servers, encryption, and more.
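To see how little an option costs, here's the same module call with DNS servers and tags added (a sketch — `dnsServers` and `tags` are parameters on the AVM VNet module, but check the module's README for the exact names in your pinned version; the DNS server IP is a placeholder):

```bicep
module vnet 'br/public:avm/res/network/virtual-network:0.5.2' = {
  name: 'deploy-vnet-hub'
  params: {
    name: 'vnet-hub'
    location: 'eastus2'
    addressPrefixes: ['10.10.0.0/16']
    // Optional extras — one line each:
    dnsServers: ['10.10.4.4'] // point the VNet at a custom DNS server (placeholder IP)
    tags: {
      environment: 'dev'
      series: 'azure-networking-part2'
    }
  }
}
```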
Finding AVM modules:
- AVM website: https://aka.ms/avm — browse all available modules
- Bicep Registry: br/public:avm/res/&lt;provider&gt;/&lt;resource-type&gt;:&lt;version&gt;
- GitHub: https://github.com/Azure/bicep-registry-modules — source code, examples, and docs
Key networking modules we'll use in this series:
| Module | Registry Path | Used In |
|---|---|---|
| Virtual Network | br/public:avm/res/network/virtual-network | Part 2, 3 |
| NSG | br/public:avm/res/network/network-security-group | Part 2, 3 |
| Route Table | br/public:avm/res/network/route-table | Part 3, 4 |
| VNet Peering | (part of VNet module) | Part 3 |
| Azure Firewall | br/public:avm/res/network/azure-firewall | Part 4 |
| ExpressRoute Gateway | br/public:avm/res/network/virtual-network-gateway | Part 5 |
| Bastion | br/public:avm/res/network/bastion-host | Part 6 |
Hands-On: Your First Bicep Deployment
Let's deploy the hub VNet from Part 1's design. We'll do it two ways: raw Bicep and then with AVM.
Option A: Raw Bicep
Create a file called main.bicep:
```bicep
// ============================================
// Part 2: Hub VNet with Subnets and NSGs
// ============================================
targetScope = 'resourceGroup'

@description('Azure region for all resources')
param location string = resourceGroup().location

@description('Environment name used for naming')
param environment string = 'dev'

// --- NSG for shared services subnet ---
resource nsgSharedServices 'Microsoft.Network/networkSecurityGroups@2024-01-01' = {
  name: 'nsg-${environment}-shared-services'
  location: location
  properties: {
    securityRules: [
      {
        name: 'Allow-DNS-Inbound'
        properties: {
          priority: 100
          direction: 'Inbound'
          access: 'Allow'
          protocol: '*'
          sourcePortRange: '*'
          destinationPortRange: '53'
          sourceAddressPrefix: '10.0.0.0/8'
          destinationAddressPrefix: '*'
        }
      }
    ]
  }
}

// --- NSG for management subnet ---
resource nsgManagement 'Microsoft.Network/networkSecurityGroups@2024-01-01' = {
  name: 'nsg-${environment}-management'
  location: location
  properties: {
    securityRules: [
      {
        name: 'Deny-All-Inbound'
        properties: {
          priority: 4096
          direction: 'Inbound'
          access: 'Deny'
          protocol: '*'
          sourcePortRange: '*'
          destinationPortRange: '*'
          sourceAddressPrefix: '*'
          destinationAddressPrefix: '*'
        }
      }
    ]
  }
}

// --- Hub VNet ---
resource vnetHub 'Microsoft.Network/virtualNetworks@2024-01-01' = {
  name: 'vnet-${environment}-hub'
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: ['10.10.0.0/16']
    }
    subnets: [
      {
        name: 'GatewaySubnet'
        properties: {
          addressPrefix: '10.10.0.0/26'
        }
      }
      {
        name: 'AzureFirewallSubnet'
        properties: {
          addressPrefix: '10.10.1.0/26'
        }
      }
      {
        name: 'AzureBastionSubnet'
        properties: {
          addressPrefix: '10.10.2.0/26'
        }
      }
      {
        name: 'snet-management'
        properties: {
          addressPrefix: '10.10.3.0/24'
          networkSecurityGroup: {
            id: nsgManagement.id
          }
        }
      }
      {
        name: 'snet-shared-services'
        properties: {
          addressPrefix: '10.10.4.0/24'
          networkSecurityGroup: {
            id: nsgSharedServices.id
          }
        }
      }
    ]
  }
}

// --- Outputs ---
output vnetId string = vnetHub.id
output vnetName string = vnetHub.name
output subnetIds array = [for subnet in vnetHub.properties.subnets: subnet.id]
```
Deploy it:
```shell
# Create a resource group
az group create --name rg-networking-dev --location eastus2

# Deploy
az deployment group create \
  --resource-group rg-networking-dev \
  --template-file main.bicep \
  --parameters environment=dev

# What-if (preview changes without deploying)
az deployment group what-if \
  --resource-group rg-networking-dev \
  --template-file main.bicep
```
Option B: The AVM Way (Recommended)
Same result, less code, more features out of the box:
```bicep
// ============================================
// Part 2: Hub VNet using AVM Module
// ============================================
targetScope = 'resourceGroup'

param location string = resourceGroup().location
param environment string = 'dev'

// --- NSGs ---
module nsgSharedServices 'br/public:avm/res/network/network-security-group:0.5.0' = {
  name: 'deploy-nsg-shared-services'
  params: {
    name: 'nsg-${environment}-shared-services'
    location: location
    securityRules: [
      {
        name: 'Allow-DNS-Inbound'
        properties: {
          priority: 100
          direction: 'Inbound'
          access: 'Allow'
          protocol: '*'
          sourcePortRange: '*'
          destinationPortRange: '53'
          sourceAddressPrefix: '10.0.0.0/8'
          destinationAddressPrefix: '*'
        }
      }
    ]
  }
}

module nsgManagement 'br/public:avm/res/network/network-security-group:0.5.0' = {
  name: 'deploy-nsg-management'
  params: {
    name: 'nsg-${environment}-management'
    location: location
  }
}

// --- Hub VNet ---
module vnetHub 'br/public:avm/res/network/virtual-network:0.5.2' = {
  name: 'deploy-vnet-hub'
  params: {
    name: 'vnet-${environment}-hub'
    location: location
    addressPrefixes: ['10.10.0.0/16']
    subnets: [
      {
        name: 'GatewaySubnet'
        addressPrefix: '10.10.0.0/26'
      }
      {
        name: 'AzureFirewallSubnet'
        addressPrefix: '10.10.1.0/26'
      }
      {
        name: 'AzureBastionSubnet'
        addressPrefix: '10.10.2.0/26'
      }
      {
        name: 'snet-management'
        addressPrefix: '10.10.3.0/24'
        networkSecurityGroupResourceId: nsgManagement.outputs.resourceId
      }
      {
        name: 'snet-shared-services'
        addressPrefix: '10.10.4.0/24'
        networkSecurityGroupResourceId: nsgSharedServices.outputs.resourceId
      }
    ]
  }
}

output vnetId string = vnetHub.outputs.resourceId
output vnetName string = vnetHub.outputs.name
```
What's different with AVM?
Notice we didn't configure:
- Diagnostic settings
- Resource locks
- Tags
- RBAC
The AVM module supports all of these as optional parameters. When you need them, you add one line. When you don't, the module does the right thing by default.
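For example, adding a delete lock and diagnostic settings to the VNet module call is two extra parameters. A sketch based on the AVM common interface (verify the exact shapes in the module's README for your pinned version; `logAnalyticsWorkspaceId` is a hypothetical param holding your workspace's resource ID):

```bicep
module vnetHub 'br/public:avm/res/network/virtual-network:0.5.2' = {
  name: 'deploy-vnet-hub'
  params: {
    name: 'vnet-${environment}-hub'
    location: location
    addressPrefixes: ['10.10.0.0/16']
    // AVM common-interface extras:
    lock: {
      kind: 'CanNotDelete' // protect the hub from accidental deletion
      name: 'lock-vnet-hub'
    }
    diagnosticSettings: [
      {
        workspaceResourceId: logAnalyticsWorkspaceId // hypothetical param
      }
    ]
  }
}
```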
Compare: the raw Bicep version is 100+ lines and only covers the basics. The AVM version is ~60 lines and gives you access to every feature the VNet resource supports — diagnostic settings, DNS servers, peering, encryption, flow timeout — all as optional params.
Project Structure for This Series
Here's how I organize the companion repo:
azure-networking-series/
├── part2-bicep/
│ ├── main.bicep # Hub VNet deployment
│ ├── main.bicepparam # Parameter file
│ └── README.md
├── part3-hub-spoke/
│ ├── main.bicep # Hub + Spoke VNets + Peering
│ └── ...
├── part4-firewall/
│ ├── main.bicep # NVA/Firewall + UDRs
│ └── ...
├── full-architecture/
│ ├── main.bicep # Complete deployment
│ ├── modules/
│ │ ├── hub.bicep
│ │ ├── spoke.bicep
│ │ └── connectivity.bicep
│ └── README.md
└── README.md # Series overview + architecture diagram
Each part's folder is self-contained — you can az deployment group create from any part folder and get a working deployment.
Bicep Tips That'll Save You Time
1. Always use what-if before deploying
```shell
az deployment group what-if --resource-group rg-networking-dev --template-file main.bicep
```
This shows you exactly what will be created, modified, or deleted — without touching anything. Use it every single time.
2. Use .bicepparam files instead of inline parameters
```bicep
// main.bicepparam
using './main.bicep'

param location = 'eastus2'
param environment = 'prod'
```

```shell
az deployment group create --resource-group rg-networking-prod --parameters main.bicepparam
```
Cleaner than --parameters environment=prod location=eastus2 and you can commit them to Git per environment.
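In practice that means one committed file per environment (illustrative — the values mirror the parameters declared in our main.bicep):

```bicep
// dev.bicepparam — sits alongside prod.bicepparam in the repo
using './main.bicep'

param location = 'eastus2'
param environment = 'dev'
```

Because the `using` statement names the template, `az deployment group create --parameters dev.bicepparam` can resolve the template file on its own.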
3. Use the VS Code extension's resource snippet
Type res- in VS Code and you'll get autocomplete for every Azure resource type. It generates the full resource skeleton with the latest API version.
4. Leverage dependsOn only when Bicep can't infer it
Bicep automatically detects dependencies when you reference one resource from another (like we did with nsgManagement.id in the subnet). You only need explicit dependsOn when nothing in a resource's body references the other resource, so Bicep has no way to infer the ordering.
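A sketch of the difference (the monitoring module is a hypothetical example of a resource that must deploy after the VNet without referencing it in any property):

```bicep
param location string = resourceGroup().location

resource nsg 'Microsoft.Network/networkSecurityGroups@2024-01-01' = {
  name: 'nsg-example'
  location: location
}

// Implicit dependency: nsg.id is referenced below, so Bicep
// deploys the NSG before the VNet — no dependsOn needed.
resource vnet 'Microsoft.Network/virtualNetworks@2024-01-01' = {
  name: 'vnet-example'
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: ['10.20.0.0/16']
    }
    subnets: [
      {
        name: 'snet-app'
        properties: {
          addressPrefix: '10.20.1.0/24'
          networkSecurityGroup: {
            id: nsg.id // this reference creates the ordering
          }
        }
      }
    ]
  }
}

// Explicit dependency (hypothetical): nothing in this module's params
// references vnet, so Bicep can't infer the ordering — declare it.
module monitoring './modules/monitoring.bicep' = {
  name: 'deploy-monitoring'
  dependsOn: [
    vnet
  ]
}
```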
5. Use @description decorators
```bicep
@description('The environment name (dev, staging, prod)')
@allowed(['dev', 'staging', 'prod'])
param environment string
```
These show up in the Azure Portal when someone deploys your template manually. Good practice, zero effort.
Summary
| Concept | Key Takeaway |
|---|---|
| Bicep | Azure's native IaC language. Clean syntax, no state file, day-zero feature support |
| Bicep vs Terraform | Use Bicep for Azure-only. Use Terraform for multi-cloud |
| AVM | Microsoft's official module library. Production-ready, well-tested, saves you 50-70% of code |
| what-if | Always preview before deploying. Always |
| Project structure | One folder per part, self-contained, with parameter files per environment |
What's Next
Part 3 and Part 4 are live!
- Part 3: Hub/Spoke Architecture Deep Dive — VNet peering, transit routing, shared services, and why hub/spoke is the default enterprise topology — deployed with Bicep and AVM.
- Part 4: Firewalling with Azure Firewall & NVAs — Azure Firewall vs Palo Alto NVAs, routing through the firewall, SNAT/DNAT, security policies, and cost optimization.
Read the full series at routetozero.dev/blog.
About the Author
I'm a cloud network engineer specializing in Azure enterprise architectures — hub/spoke, ExpressRoute, NVAs, and infrastructure-as-code with Bicep. If you're building something similar in your org and want a second pair of eyes, feel free to reach out.
Found this useful? Follow this blog so you don't miss the rest of the series — Parts 3 and 4 build out the hub/spoke topology and firewalling. Drop your questions in the comments, I read every one.