Solved: How Often Do You Write Pester tests?

🚀 Executive Summary

TL;DR: Many IT organizations struggle with inconsistent Pester testing, leading to production failures and slow deployments. This guide advocates for integrating Pester tests consistently through Test-Driven Development for new code, retroactive characterization tests for legacy systems, and automated CI/CD pipeline integration to ensure code quality and reliability.

🎯 Key Takeaways

  • Implement Test-Driven Development (TDD) with Pester by writing failing tests first, then minimal code to pass them, and finally refactoring, ensuring every code piece has a purpose and test.
  • Apply retroactive testing to legacy code using characterization tests to document existing behavior and Pester’s Mock cmdlet to isolate dependencies, enabling safe, incremental refactoring.
  • Integrate Pester tests into CI/CD pipelines (e.g., Azure DevOps, GitHub Actions) for automated validation, early feedback, consistent environments, and deployment gates, preventing faulty code from reaching production.

Optimize your PowerShell scripts and infrastructure configurations by adopting consistent Pester testing practices. This guide explores strategies from Test-Driven Development to CI/CD integration, helping IT professionals ensure code quality and reliability.

The Silent Threat: When Pester Tests Are an Afterthought

In the fast-paced world of IT operations and infrastructure as code, the reliability of our PowerShell scripts and configurations is paramount. Yet, many organizations struggle with a consistent approach to testing. The question, “How often do you write Pester tests?”, often reveals a spectrum from diligent automation to reactive firefighting.

Symptoms of Under-Tested Code

Ignoring or inconsistently applying Pester tests can lead to a host of operational headaches and inefficiencies. Do any of these sound familiar?

  • Unexpected Production Failures: A seemingly minor change to a script or configuration unexpectedly breaks a critical service, often discovered during business hours.
  • “Works On My Machine” Syndrome: Scripts function perfectly in development environments but fail unpredictably in staging or production due to subtle environmental differences or unhandled dependencies.
  • Fear of Refactoring: Teams become hesitant to improve or refactor existing scripts, fearing that any change might introduce regressions in untested areas.
  • Prolonged Debugging Sessions: Simple issues balloon into hours of debugging because there’s no clear, automated way to pinpoint the exact failing component.
  • Inefficient Manual Testing: Reliance on manual testing, which is time-consuming, prone to human error, and rarely covers all edge cases.
  • Lack of Confidence in Deployments: Every deployment feels like a gamble, leading to slower release cycles and increased stress for the team.

These symptoms point to a fundamental lack of automated assurance. Thankfully, several proven strategies can integrate Pester testing into your workflow effectively.

Solution 1: Embracing Test-Driven Development (TDD) with Pester

Test-Driven Development (TDD) flips the traditional coding process on its head. Instead of writing code and then testing it, you write the tests first, watch them fail, then write just enough code to make them pass, and finally refactor. This iterative cycle drives design and ensures every piece of code has a purpose and a corresponding test.

How TDD Works with Pester

  • Write a Failing Test: Before writing any functional code, define a desired behavior and write a Pester test that asserts this behavior. This test should fail initially because the functionality doesn’t exist yet.
  • Write Minimal Code: Write just enough functional code to make the failing test pass. Focus purely on meeting the test’s requirements, not on elegance or completeness.
  • Refactor: Once the test passes, refactor your functional code to improve its design, readability, and efficiency, all while ensuring the test continues to pass. This provides a safety net against regressions.
  • Repeat: Continue this cycle for each new feature or piece of functionality.

Example: TDD for a Simple PowerShell Function

Let’s say we need a function, Get-DiskSpaceReport, that returns disk space information in GB for local drives.

Step 1: Write a Failing Test

Create a .Tests.ps1 file (e.g., Get-DiskSpaceReport.Tests.ps1) and add a test for a basic scenario. We expect it to return an array of objects with ‘DriveLetter’ and ‘FreeSpaceGB’ properties.

Describe "Get-DiskSpaceReport" {
    It "Should return objects with DriveLetter and FreeSpaceGB properties" {
        # Temporarily mock Get-WmiObject to avoid actual WMI calls during initial development
        Mock Get-WmiObject {
            [PSCustomObject]@{
                DriveLetter = "C:";
                FreeSpace = 107374182400; # 100 GB in bytes
                Size = 536870912000;      # 500 GB in bytes
            }
        }

        $report = Get-DiskSpaceReport
        $report | Should Not BeNullOrEmpty()
        $report[0] | Should HaveProperty 'DriveLetter'
        $report[0] | Should HaveProperty 'FreeSpaceGB'
        $report[0].FreeSpaceGB | Should BeGreaterThan 0
    }
}

Running Invoke-Pester -Path .\Get-DiskSpaceReport.Tests.ps1 at this point would fail because Get-DiskSpaceReport.ps1, and the function it defines, doesn’t exist yet.

Step 2: Write Minimal Code to Pass the Test

Create Get-DiskSpaceReport.ps1 and add the bare minimum to satisfy the test:

function Get-DiskSpaceReport {
    # Queries local fixed disks (DriveType=3). In the test above, Get-WmiObject
    # is mocked, so no real WMI call is made during the Pester run.
    # Note: Get-WmiObject requires Windows PowerShell; on PowerShell 7+ use Get-CimInstance.
    Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType=3" | ForEach-Object {
        [PSCustomObject]@{
            DriveLetter  = $_.DeviceID
            FreeSpaceGB  = [math]::Round($_.FreeSpace / 1GB, 2)
            TotalSpaceGB = [math]::Round($_.Size / 1GB, 2)
        }
    }
}

Now, run the tests again. They should pass.
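
If you want more verbose feedback while iterating, Pester v5’s -Output parameter prints every test result as it runs:

Invoke-Pester -Path .\Get-DiskSpaceReport.Tests.ps1 -Output Detailed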

Step 3: Refactor

With the test passing, you can now refactor Get-DiskSpaceReport to improve its robustness, error handling, or performance, knowing your test will catch any regressions.
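
For example, a possible refactored version (a sketch, not the only way to do it) adds advanced-function plumbing and error handling while keeping the same output shape, so the existing test keeps passing:

function Get-DiskSpaceReport {
    [CmdletBinding()]
    param ()

    try {
        # Same query as before, but fail fast with a clear error if WMI is unavailable
        Get-WmiObject -Class Win32_LogicalDisk -Filter "DriveType=3" -ErrorAction Stop |
            ForEach-Object {
                [PSCustomObject]@{
                    DriveLetter  = $_.DeviceID
                    FreeSpaceGB  = [math]::Round($_.FreeSpace / 1GB, 2)
                    TotalSpaceGB = [math]::Round($_.Size / 1GB, 2)
                }
            }
    }
    catch {
        Write-Error "Failed to query disk information: $_"
    }
}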

Solution 2: Retroactive Testing for Legacy Code

It’s a common scenario: you inherit a critical PowerShell script or module with years of undocumented changes and zero tests. TDD isn’t an option for *existing* code, but you can still introduce Pester tests to improve its reliability and facilitate future modifications.

Strategy: Characterization Tests and Incremental Refactoring

  • Characterization Tests: Start by writing tests that “characterize” the current behavior of the legacy code. These tests document how the code *actually* behaves, even if that behavior is undesirable. This creates a safety net.
  • Identify Critical Paths: Don’t try to test everything at once. Focus on the most critical functions, cmdlets, or execution paths that would cause significant business impact if they failed.
  • Isolate Dependencies: Legacy code often has many side effects or external dependencies. Use Pester’s Mock cmdlet extensively to isolate the code under test from external systems (e.g., file system, registry, network calls, other cmdlets); see the short sketch just after this list.
  • Small, Incremental Changes: Once characterization tests are in place, make very small, surgical changes to refactor or improve the code. Run the tests after each change to ensure nothing broke.
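
To make the dependency-isolation point concrete, here is a minimal sketch for a hypothetical legacy helper, Get-LegacyConfigValue, that reads a settings file; mocking Test-Path and Get-Content keeps the test completely off the real file system:

# Get-LegacyConfigValue.Tests.ps1 (sketch; Get-LegacyConfigValue is hypothetical)
BeforeAll {
    . "$PSScriptRoot\Get-LegacyConfigValue.ps1"
}

Describe "Get-LegacyConfigValue" {
    It "Returns the value for a known key" {
        # Fake the file system so the test never touches a real settings file
        Mock Test-Path   { $true }
        Mock Get-Content { @("Timeout=30", "RetryCount=5") }

        Get-LegacyConfigValue -Key "Timeout" | Should -Be "30"
    }
}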

Example: Retroactive Testing for a Legacy Firewall Script

Imagine a script named Set-LegacyFirewallRule.ps1 that manages a specific firewall rule based on dynamic inputs. It’s complex and has direct calls to netsh advfirewall. We want to ensure it correctly adds a rule.

# Original legacy script snippet (Set-LegacyFirewallRule.ps1)
function Set-LegacyFirewallRule {
    param (
        [string]$RuleName,
        [int]$Port,
        [string]$Protocol = "TCP",
        [string]$Direction = "Inbound",
        [string]$Action = "Allow"
    )

    $command = "netsh advfirewall firewall add rule name=`"$RuleName`" dir=$Direction action=$Action protocol=$Protocol localport=$Port"
    Write-Host "Executing: $command"
    Invoke-Expression $command # Highly dangerous in production, but we're testing legacy code
}

Now, let’s write a characterization test:

# Set-LegacyFirewallRule.Tests.ps1 (Pester v5 syntax)
BeforeAll {
    # Dot-source the function under test
    . "$PSScriptRoot\Set-LegacyFirewallRule.ps1"
}

Describe "Set-LegacyFirewallRule" {
    Context "Adding a new inbound TCP rule" {
        It "Should construct and execute the correct netsh command" {
            $expectedCommand = 'netsh advfirewall firewall add rule name="TestRule" dir=Inbound action=Allow protocol=TCP localport=8080'

            # Mock Invoke-Expression so the firewall is never actually modified
            Mock Invoke-Expression {}

            Set-LegacyFirewallRule -RuleName "TestRule" -Port 8080 -Protocol "TCP" -Direction "Inbound" -Action "Allow"

            # Verify the exact command string the legacy code would have executed
            Should -Invoke -CommandName Invoke-Expression -Exactly -Times 1 -ParameterFilter { $Command -eq $expectedCommand }
        }
    }
}

This test verifies that the *dangerous* Invoke-Expression receives the *expected* string without actually running it. This gives you confidence that if you refactor Set-LegacyFirewallRule to use New-NetFirewallRule, your characterization test will fail if the new implementation doesn’t produce the same logical outcome, or in this case, the same command string. You can then update the test to reflect the new desired behavior.
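
As a sketch of what such a refactor could look like (assuming the built-in NetSecurity module on a supported Windows host), the function might call New-NetFirewallRule directly; you would then update the characterization test to mock and assert on New-NetFirewallRule instead of Invoke-Expression:

# Possible refactor (sketch): replace Invoke-Expression/netsh with the native cmdlet
function Set-LegacyFirewallRule {
    param (
        [string]$RuleName,
        [int]$Port,
        [string]$Protocol = "TCP",
        [string]$Direction = "Inbound",
        [string]$Action = "Allow"
    )

    New-NetFirewallRule -DisplayName $RuleName `
                        -Direction $Direction `
                        -Action $Action `
                        -Protocol $Protocol `
                        -LocalPort $Port | Out-Null
}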

TDD vs. Retroactive Testing: A Comparison

  • Project Phase: TDD fits new feature development and greenfield projects; retroactive testing fits existing, untested codebases and maintenance or refactoring work.
  • Primary Goal: TDD drives code design, prevents bugs from the start, and ensures correctness; retroactive testing establishes a safety net, prevents regressions during changes, and enables future refactoring.
  • Initial Investment: TDD demands more up-front thought on test design but integrates naturally into coding; retroactive testing requires significant effort to understand existing behavior, isolate code, and write characterization tests.
  • Risk Reduction: TDD prevents bugs by design and gives high confidence in new features; retroactive testing reduces the risk of breaking existing functionality and builds confidence in changes to legacy code.
  • Code Quality Impact: TDD leads to modular, testable, and maintainable code from inception; retroactive testing exposes design flaws, identifies areas for improvement, and makes future refactoring safer.
  • Typical Use: TDD suits writing new PowerShell modules, functions, or DSC configurations; retroactive testing suits adding tests to existing operational scripts and complex legacy functions.

Solution 3: Integrated CI/CD Pipeline Testing

Writing Pester tests is only half the battle; the other half is ensuring they are run consistently and automatically. Integrating Pester tests into your Continuous Integration/Continuous Deployment (CI/CD) pipeline is crucial for maintaining code quality and catching issues early.

Benefits of CI/CD Integration

  • Automated Validation: Every code change triggers automated test execution, eliminating the need for manual checks.
  • Early Feedback: Developers receive immediate feedback if their changes break existing functionality or introduce new bugs.
  • Consistent Environment: Tests run in a standardized, controlled environment, reducing “works on my machine” issues.
  • Deployment Gates: Pester test failures can block deployments, preventing faulty code from reaching production.
  • Traceability and Reporting: CI/CD tools can publish test results, providing clear visibility into test coverage and pass/fail rates over time.

Example: Integrating Pester into Azure DevOps Pipeline

Here’s a snippet from an Azure DevOps YAML pipeline that runs Pester tests and publishes the results.

# azure-pipelines.yml
trigger:
- main

pool:
  vmImage: 'windows-latest'

steps:
- checkout: self
  submodules: true

- task: PowerShell@2
  displayName: 'Run Pester Tests'
  inputs:
    targetType: 'filePath'
    filePath: '$(System.DefaultWorkingDirectory)/Run-AllPesterTests.ps1' # Script to discover and run all Pester tests
    arguments: '-OutputPath "$(Build.SourcesDirectory)/TestResults.xml"' # Output path for test results
    pwsh: true # Use PowerShell Core for consistency

- task: PublishTestResults@2
  displayName: 'Publish Pester Test Results'
  inputs:
    testResultsFormat: 'NUnit' # Pester can output in NUnit XML format
    testResultsFiles: '$(Build.SourcesDirectory)/TestResults.xml'
    mergeTestResults: true
    failTaskOnFailedTests: true
    testRunTitle: 'Pester Tests for $(Build.DefinitionName)'

The Run-AllPesterTests.ps1 script would typically look something like this:

# Run-AllPesterTests.ps1
param (
    [string]$OutputPath = "$PSScriptRoot/TestResults.xml"
)

# Ensure a current Pester (v5+) module is available
if (-not (Get-Module -ListAvailable -Name Pester | Where-Object { $_.Version -ge [version]'5.0' })) {
    Write-Host "Pester 5+ not found. Installing..."
    # -SkipPublisherCheck avoids conflicts with the Pester 3.x module that ships with Windows
    Install-Module -Name Pester -Force -Scope CurrentUser -SkipPublisherCheck
}

# Find all .Tests.ps1 files recursively within the script's directory
$testFiles = Get-ChildItem -Path $PSScriptRoot -Filter "*.Tests.ps1" -Recurse |
    Select-Object -ExpandProperty FullName

if ($testFiles) {
    Write-Host "Found Pester test files: $($testFiles.Count)"

    # Run Pester via a v5 configuration object and export results as NUnit XML
    $config = New-PesterConfiguration
    $config.Run.Path = $testFiles
    $config.Run.PassThru = $true
    $config.TestResult.Enabled = $true
    $config.TestResult.OutputFormat = 'NUnitXml'
    $config.TestResult.OutputPath = $OutputPath

    $result = Invoke-Pester -Configuration $config

    if ($result.FailedCount -gt 0) {
        Write-Error "Pester tests failed: $($result.FailedCount) failing test(s)."
        exit 1 # Indicate failure to the CI/CD pipeline
    } else {
        Write-Host "All Pester tests passed."
    }
} else {
    Write-Warning "No Pester test files (*.Tests.ps1) found."
}

Example: Integrating Pester into GitHub Actions

Similar integration can be achieved with GitHub Actions:

# .github/workflows/main.yml
name: CI with Pester Tests

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  build-and-test:
    runs-on: windows-latest # Or ubuntu-latest for cross-platform PowerShell

    steps:
    - name: Checkout code
      uses: actions/checkout@v3

    - name: Install Pester
      shell: pwsh
      # pwsh is preinstalled on GitHub-hosted runners, so no separate setup step is needed.
      # -SkipPublisherCheck avoids conflicts with the Pester 3.x module on Windows images.
      run: Install-Module -Name Pester -Force -Scope CurrentUser -SkipPublisherCheck

    - name: Run Pester Tests
      shell: pwsh
      run: |
        $config = New-PesterConfiguration
        $config.Run.Path = $env:GITHUB_WORKSPACE
        $config.Run.PassThru = $true
        $config.TestResult.Enabled = $true
        $config.TestResult.OutputFormat = 'NUnitXml'
        $config.TestResult.OutputPath = "$($env:GITHUB_WORKSPACE)/TestResults.xml"

        $result = Invoke-Pester -Configuration $config
        if ($result.FailedCount -gt 0) {
            Write-Error "Pester tests failed: $($result.FailedCount) failing test(s)."
            exit 1
        }

    - name: Publish Test Results
      uses: actions/upload-artifact@v3
      if: always() # Upload even if tests fail
      with:
        name: pester-test-results
        path: ${{ github.workspace }}/TestResults.xml

Conclusion: Pester Testing as a Core Practice

The question isn’t “How often do you write Pester tests?”, but rather “How often do you *not* want to deal with preventable outages, manual debugging, and fear of change?” Consistent Pester testing, whether through TDD for new development, characterization tests for legacy code, or automated CI/CD integration, transforms your PowerShell scripts from potential liabilities into robust, reliable assets. Make Pester testing an integral part of your development and operations workflow, and reap the benefits of higher quality, increased confidence, and faster delivery.


Darian Vance

👉 Read the original article on TechResolve.blog


☕ Support my work

If this article helped you, you can buy me a coffee:

👉 https://buymeacoffee.com/darianvance
