Nathan Araújo
Exploring Azure Functions for Synthetic Monitoring with Playwright: A Complete Guide - Part 2

Introduction

In the first article, we explored how to build a synthetic monitoring solution with Azure Functions and Playwright. Now, we'll dive deep into the reporting mechanism - the crucial component that transforms test results into actionable insights by sending telemetry to Application Insights and storing test artifacts in Azure Blob Storage.

This article focuses on the Test Results Processing layer of our architecture, explaining how the custom Playwright reporter works, when and how it sends data to Azure services, and best practices for monitoring and troubleshooting.

The Reporting Architecture

┌─────────────────┐
│   Playwright    │
│   Test Runner   │
│                 │
└─────────────────┘
          │
          ▼
┌─────────────────┐
│  Test Results   │ ◄── Custom Reporter Implementation
│   Processing    │
└─────────────────┘
          │
┌─────────┴─────────┐
▼                   ▼
┌─────────────────┐ ┌─────────────────┐
│ Application     │ │   Azure Blob    │
│ Insights        │ │   Storage       │
│ (Telemetry)     │ │ (Artifacts)     │
└─────────────────┘ └─────────────────┘

Understanding Playwright Reporters

Playwright reporters are plugins that process test results during and after test execution. Our custom reporter implements the Reporter interface with key lifecycle methods:

  • onTestEnd() - Called when each individual test completes
  • onEnd() - Called when the entire test run finishes
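As a minimal illustration of how these two hooks fire (once per test, then once per run), here is a sketch that uses local stand-in methods rather than the real Playwright `Reporter` interface, so it runs standalone:

```typescript
// Minimal sketch of the two lifecycle hooks; the method signatures here are
// simplified stand-ins, not the real Playwright Reporter interface.
class OrderReporter {
  readonly events: string[] = [];

  // Fired once for each completed test
  onTestEnd(testTitle: string, status: string) {
    this.events.push(`test:${testTitle}:${status}`);
  }

  // Fired once after the whole run finishes
  onEnd(runStatus: string) {
    this.events.push(`run:${runStatus}`);
  }
}
```

Playwright guarantees this ordering: all `onTestEnd()` calls happen before the single `onEnd()` call, which is why the custom reporter can safely flush telemetry and upload artifacts in `onEnd()`.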

Configuring the Custom Reporter in Playwright

1. Playwright Configuration Setup

The custom reporter is integrated into the Playwright configuration file (playwright.config.ts). Here's how it's configured:

import path from 'path';
import { defineConfig } from '@playwright/test';

// htmlReportPath and outputPath are resolved earlier in the config file
// (for example, from environment variables).
export default defineConfig({
  // ... other configuration options

  reporter: [
    ['html', { outputFolder: htmlReportPath, open: 'never' }],            // Built-in HTML reporter
    ['junit', { outputFile: path.join(outputPath, 'junit-report.xml') }], // JUnit XML
    ['list'],                                                             // Console list reporter
    ['./src/support/reporter/appinsights-reporter.ts']                    // Our custom reporter
  ],

  // ... rest of configuration
});

2. Reporter Loading and Instantiation

When Playwright loads the custom reporter, it follows this process:

// appinsights-reporter.ts - Export function that returns reporter instance
export default function (): Reporter {
  return new AppInsightsReporter();
}

Key Points:

  • The file path is relative to the project root
  • The exported function is called once per test run
  • Multiple reporters can run simultaneously
  • Each reporter receives the same test events

3. Reporter Lifecycle Integration

The Playwright engine automatically calls our reporter methods:

Test Execution Flow:
┌─────────────────┐
│ Playwright CLI  │
└─────────────────┘
          │
          ▼
┌─────────────────┐
│ Load Reporters  │ ◄── Our custom reporter loaded here
└─────────────────┘
          │
          ▼
┌─────────────────┐
│   Run Tests     │
└─────────────────┘
          │
          ▼ (for each test)
┌─────────────────┐
│ onTestEnd()     │ ◄── Individual test telemetry sent
└─────────────────┘
          │
          ▼ (after all tests)
┌─────────────────┐
│ onEnd()         │ ◄── Artifacts uploaded, telemetry flushed
└─────────────────┘

Custom Reporter Implementation Breakdown

1. Core Reporter Structure

import { Reporter, TestCase, TestResult, FullResult } from '@playwright/test/reporter';

class AppInsightsReporter implements Reporter {
  private runTimestamp: string | null = null;

  // Called for each test completion
  onTestEnd(test: TestCase, result: TestResult) {
    // Process individual test results
  }

  // Called when all tests complete
  async onEnd(result: FullResult): Promise<void> {
    // Process final results and upload artifacts
  }
}

2. Individual Test Processing (onTestEnd)

This method handles real-time telemetry for each test:

onTestEnd(test: TestCase, result: TestResult) {
  // Generate unique timestamp for this test run
  if (!this.runTimestamp) {
    this.runTimestamp = new Date().toISOString().replace(/[:.]/g, '-');
  }

  log(`[Reporter] onTestEnd called for test: ${test.title} with status: ${result.status}`);

  // Handle retry logic - only report final attempts
  const retries = test.retries || 0;
  const isLastRetry = result.retry === retries;
  const testPassed = result.status === 'passed';

  const shouldSendTrace = testPassed || isLastRetry;
  if (!shouldSendTrace) {
    log(`[Reporter] Skipping report for intermediate retry: ${test.title} (retry ${result.retry})`);
    return;
  }

  // Send availability telemetry to Application Insights
  const duration = result.duration || 0;
  const success = result.status === 'passed';

  appInsightsClient.trackAvailability({
    name: test.title,
    success,
    duration,
    runLocation: 'Azure Function - Playwright Synthetic Monitoring',
    message: success ? 'Test passed' : `Test failed: ${result.error?.message}`,
    // Use the test's actual start time; the sanitized runTimestamp string
    // (colons replaced with dashes) is not parseable by the Date constructor.
    time: result.startTime,
    id: test.id,
  });
}

Key Features:

  1. Retry Handling: Only sends telemetry for the final retry attempt to avoid duplicate data
  2. Availability Tracking: Uses Application Insights availability telemetry for uptime monitoring
  3. Rich Context: Includes test duration, success status, and error messages
  4. Unique Identification: Each test run gets a timestamp-based identifier
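The retry rule above reduces to a pure predicate, sketched here (the function name is ours, for illustration): report a result only when the test passed or when it was the final attempt.

```typescript
// Pure form of the retry-reporting rule: send telemetry for a pass,
// or for the last attempt of a failing test; skip intermediate retries.
function shouldReport(status: string, retryIndex: number, maxRetries: number): boolean {
  const isLastRetry = retryIndex === maxRetries;
  return status === 'passed' || isLastRetry;
}
```

Keeping this rule pure makes it easy to unit test the reporter's deduplication behavior without running a real Playwright suite.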

3. Final Results Processing (onEnd)

This method handles artifact storage and telemetry flushing:

async onEnd(result: FullResult): Promise<void> {
  const shouldUpload = result.status === 'failed';

  log(`[Reporter] onEnd called. Test status: ${result.status}. Should upload: ${shouldUpload}`);

  // Upload artifacts only on failure
  if (shouldUpload) {
    try {
      await this.uploadTestArtifacts();
    } catch (err) {
      logError('[Reporter] Failed to zip or upload report:', err);
    }
  }

  // Ensure all telemetry is sent
  try {
    await flushTelemetry(5000);
  } catch (err) {
    logError('[AppInsights] Error during flush:', err);
  }
}

Key Features:

  1. Conditional Upload: Only uploads artifacts when tests fail
  2. Error Handling: Graceful handling of upload failures
  3. Telemetry Flushing: Ensures all data reaches Application Insights before function ends

Application Insights Integration Deep Dive

1. Client Configuration

import * as appInsights from 'applicationinsights';

const connectionString = process.env.APPLICATIONINSIGHTS_CONNECTION_STRING;

appInsights
  .setup(connectionString)
  .setSendLiveMetrics(true)                    // Real-time monitoring
  .setDistributedTracingMode(appInsights.DistributedTracingModes.AI_AND_W3C)
  .setAutoDependencyCorrelation(true)          // Correlate related requests
  .setAutoCollectRequests(true)                // HTTP requests
  .setAutoCollectPerformance(true, true)       // Performance counters
  .setAutoCollectExceptions(true)              // Unhandled exceptions
  .setAutoCollectDependencies(true)            // External dependencies
  .setAutoCollectConsole(true)                 // Console logs
  .setUseDiskRetryCaching(true)               // Retry failed sends
  .setInternalLogging(true, true)             // Debug information
  .start();

const appInsightsClient = appInsights.defaultClient;

2. Telemetry Types and Usage

Availability Telemetry

Perfect for synthetic monitoring as it tracks uptime and response times:

appInsightsClient.trackAvailability({
  name: 'Login Flow Test',              // Test identifier
  success: true,                        // Pass/fail status
  duration: 2543,                       // Response time in ms
  runLocation: 'Azure Function',        // Where test ran
  message: 'Test passed',              // Additional context
  time: new Date(),                    // When test ran
  id: 'unique-test-id'                 // Correlation ID
});

3. Telemetry Flushing

Critical for Azure Functions to ensure data is sent before function terminates:

export async function flushTelemetry(timeoutMs: number = 5000): Promise<void> {
  return new Promise((resolve, reject) => {
    const timeout = setTimeout(() => {
      reject(new Error(`Telemetry flush timeout after ${timeoutMs}ms`));
    }, timeoutMs);

    appInsightsClient.flush({
      callback: (response) => {
        clearTimeout(timeout);
        if (response) {
          console.log('[AppInsights] Telemetry flushed successfully');
          resolve();
        } else {
          reject(new Error('Failed to flush telemetry'));
        }
      }
    });
  });
}
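The timeout guard inside `flushTelemetry` is a general pattern worth extracting. A generic sketch (our own helper, not part of the SDK) looks like this:

```typescript
// Generic version of the timeout-guard pattern used by flushTelemetry:
// reject if the wrapped promise does not settle within timeoutMs.
function withTimeout<T>(promise: Promise<T>, timeoutMs: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`timed out after ${timeoutMs}ms`)),
      timeoutMs,
    );
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}
```

In an Azure Function this matters because the host may freeze or recycle the process soon after the invocation returns; bounding the flush wait prevents a stuck telemetry channel from hanging the function past its timeout.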

Azure Blob Storage Integration Deep Dive

Azure Blob Storage serves as the repository for test artifacts, including HTML reports, screenshots, videos, and traces. This section explains in detail how the storage functions work and what they accomplish.

1. Storage Client Setup and Initialization

The blob storage client is initialized once and reused throughout the application lifecycle:

import { BlobServiceClient, ContainerClient } from '@azure/storage-blob';

const connectionString = process.env.AZURE_STORAGE_CONNECTION_STRING;
const containerName = process.env.BLOB_CONTAINER_NAME || 'test-artifacts';

let containerClient: ContainerClient | null = null;

if (connectionString) {
  const blobServiceClient = BlobServiceClient.fromConnectionString(connectionString);
  containerClient = blobServiceClient.getContainerClient(containerName);
} else {
  console.warn('[BlobStorage] Connection string not provided.');
}

export { containerClient };

What this does:

  • Connection Management: Establishes a connection to Azure Storage using the connection string
  • Container Reference: Gets a reference to the specific container where artifacts will be stored
  • Error Handling: Gracefully handles missing configuration without breaking the application
  • Singleton Pattern: Creates one client instance that's reused across the application

2. Artifact Upload Process - Detailed Breakdown

The upload process involves several critical steps, each with specific purposes:

Step 1: Report Compression Function

import fs from 'fs';
import os from 'os';
import path from 'path';
import archiver from 'archiver';

export async function zipReportFolder(folderPath: string, zipName: string): Promise<string> {
  const zipPath = path.join(os.tmpdir(), `${zipName}.zip`);

  await new Promise<void>((resolve, reject) => {
    const output = fs.createWriteStream(zipPath);
    const archive = archiver('zip', { zlib: { level: 9 } }); // Maximum compression

    output.on('close', () => {
      console.log(`[ZIP] Report successfully zipped at: ${zipPath}`);
      resolve();
    });

    archive.on('error', (err) => reject(err));

    archive.pipe(output);
    archive.directory(folderPath, false);  // Include all files in folder
    archive.finalize();
  });

  return zipPath;
}

Detailed Function Analysis:

  1. File Path Generation:

    • Creates a unique zip file path in the system's temporary directory
    • Uses the provided zipName to ensure uniqueness across test runs
  2. Stream Setup:

    • createWriteStream: Creates a writable stream to the zip file
    • archiver('zip', { zlib: { level: 9 } }): Creates a zip archiver with maximum compression
    • Level 9 compression reduces file size but takes more CPU time
  3. Content Processing:

    • archive.directory(folderPath, false): Adds entire directory contents to zip
    • false parameter means don't include the parent directory in the zip structure
    • archive.finalize(): Completes the archiving process
  4. What Gets Compressed:

    • HTML test report files
    • CSS stylesheets for report formatting
    • JavaScript files for interactive features
    • Test artifacts (screenshots, videos, traces)
    • Any additional files in the report directory

Step 2: Blob Upload Function - Deep Analysis

import fs from 'fs';

// containerClient comes from the storage client module shown earlier
export async function uploadFileToBlobStorage(filePath: string, blobName: string): Promise<void> {
  if (!containerClient) {
    console.warn('[BlobStorage] Container client not initialized.');
    return;
  }

  // Ensure container exists
  console.log('[BlobStorage] Checking/Creating container if not exists...');
  await containerClient.createIfNotExists();

  // Get blob client for specific file
  const blockBlobClient = containerClient.getBlockBlobClient(blobName);

  // Read and upload file
  console.log('[BlobStorage] Reading file...');
  const fileBuffer = fs.readFileSync(filePath);

  console.log('[BlobStorage] Uploading data...');
  await blockBlobClient.uploadData(fileBuffer, {
    blobHTTPHeaders: { 
      blobContentType: 'application/zip' 
    },
    metadata: {
      uploadedAt: new Date().toISOString(),
      source: 'playwright-synthetic-monitoring'
    }
  });

  console.log(`[BlobStorage] Upload completed: ${blobName}`);
}

Detailed Function Breakdown:

  1. Pre-flight Validation:

    • Checks if containerClient is initialized
    • Gracefully exits if storage isn't configured (non-blocking failure)
    • Prevents runtime errors in environments without blob storage
  2. Container Management:

    • createIfNotExists(): Ensures the storage container exists
    • What this does: Creates the container if it doesn't exist, ignores if it does
    • Why it's important: Handles first-time deployments and container deletion scenarios
    • Permissions: Requires storage account contributor permissions
  3. Blob Client Creation:

    • getBlockBlobClient(blobName): Creates a client for the specific blob
    • Block Blob: Optimized for streaming large files and parallel uploads
    • Alternative Types: Page blobs (for VHDs), Append blobs (for logs)
    • Naming: Uses the provided blobName which includes timestamp and test info
  4. File Processing:

    • fs.readFileSync(filePath): Reads the entire file into memory
    • Synchronous Read: Simpler but blocks the thread
    • Memory Considerations: Entire file loaded into RAM (consider streaming for large files)
    • File Buffer: Binary representation ready for upload
  5. Upload Operation:

    • uploadData(fileBuffer, options): Uploads the file data to Azure
    • HTTP Headers: Sets content type as 'application/zip' for proper handling
    • Metadata: Adds custom metadata for tracking and organization
    • Metadata Fields:
      • uploadedAt: Timestamp for retention policies and debugging
      • source: Identifies the origin system for multi-tenant scenarios
  6. What Happens in Azure:

    • File is stored in the specified container with the given name
    • Metadata is indexed and searchable
    • Content type enables proper browser handling when downloaded
    • Azure automatically handles redundancy and durability

3. Storage Organization and Naming Strategy

The blob naming follows a structured pattern:

// Example blob names generated:
// report-2025-09-09T10-30-45-123Z.zip
// report-2025-09-09T14-15-22-456Z.zip

const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
const blobName = `report-${timestamp}.zip`;

Benefits of this naming strategy:

  • Chronological Sorting: Files sort naturally by creation time
  • Uniqueness: Timestamp ensures no naming conflicts
  • Readability: Easy to identify when tests ran
  • Automation Friendly: Parseable for cleanup scripts and analytics
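Because the sanitized name is no longer a valid ISO string, a cleanup or analytics script needs to reverse the substitution. A sketch of such a parser (our own helper, assuming the `report-<timestamp>.zip` pattern above):

```typescript
// Hypothetical helper: recover a Date from a sanitized blob name such as
// 'report-2025-09-09T10-30-45-123Z.zip', e.g. for retention scripts.
function parseReportTimestamp(blobName: string): Date | null {
  const m = blobName.match(
    /^report-(\d{4})-(\d{2})-(\d{2})T(\d{2})-(\d{2})-(\d{2})-(\d{3})Z\.zip$/,
  );
  if (!m) return null;

  // Restore the ':' and '.' separators that were replaced with '-'
  const [, y, mo, d, h, mi, s, ms] = m;
  return new Date(`${y}-${mo}-${d}T${h}:${mi}:${s}.${ms}Z`);
}
```

A retention script could list blobs, parse each name, and delete anything older than a cutoff date without ever reading blob metadata.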

4. Complete Upload Workflow

async onEnd(result: FullResult): Promise<void> {
  const shouldUpload = result.status === 'failed';

  if (shouldUpload) {
    try {
      // 1. Get report path
      const htmlReportPath = process.env.PLAYWRIGHT_HTML_REPORT || 
                           path.join(os.tmpdir(), 'playwright-html-report');

      // 2. Generate unique name with timestamp
      const timestamp = this.runTimestamp ?? new Date().toISOString().replace(/[:.]/g, '-');
      const zipName = `report-${timestamp}`;

      // 3. Compress report folder
      const zipPath = await zipReportFolder(htmlReportPath, zipName);
      log(`[Reporter] Zipped report at: ${zipPath}`);

      // 4. Upload to blob storage
      await uploadFileToBlobStorage(zipPath, `${zipName}.zip`);
      log('[Reporter] Upload complete.');

      // 5. Cleanup local file (optional)
      fs.unlinkSync(zipPath);

    } catch (err) {
      logError('[Reporter] Failed to zip or upload report:', err);
    }
  }
}

Conclusion

The custom Playwright reporter is the bridge between your test execution and monitoring infrastructure. By understanding how it processes results, sends telemetry to Application Insights, and uploads artifacts to Blob Storage, you can:

  1. Monitor effectively with rich telemetry data
  2. Debug efficiently with comprehensive artifacts
  3. Scale reliably with optimized error handling
  4. Maintain visibility into your synthetic monitoring health

This reporting mechanism transforms raw test results into actionable insights, enabling proactive monitoring and rapid issue resolution in your applications.

The key to success is balancing comprehensive reporting with performance, ensuring your monitoring doesn't become a bottleneck while providing the visibility needed to maintain application reliability.
