Martin Oehlert
From AZ-204 to AI-200: What Changed and Why It Matters

Comparing the AZ-204 skill outline against the AI-200 course structure, roughly 60% of AZ-204 carries forward, 25% is dropped entirely, and AI-200 adds about 30% net-new content that AZ-204 never touched. Which side of that split you land on determines whether this transition is a week of review or a month of study. The gap is lopsided enough that you cannot assume existing knowledge transfers cleanly.

What Is AI-200?

Full name: AI-200: Azure AI Cloud Developer Associate

Format: Multiple choice, case studies, and scenario-based questions. Based on the standard Microsoft exam format, expect approximately 40-60 questions, a 100-minute window, and a passing score around 700/1000.

Timeline:

  • Beta exam: April 2026
  • General availability: July 2026 (estimated)
  • AZ-204 retirement: July 31, 2026

Course: The AI-200T00 instructor-led training course maps to seven learning paths that define the exam scope:

  1. Container Hosting: ACR, App Service containers, Container Apps, AKS
  2. Cosmos DB: NoSQL API with vector search and AI integration
  3. PostgreSQL Vector Search: pgvector, HNSW indexes, hybrid search
  4. Azure Managed Redis: data operations, event messaging, vector storage
  5. Backend Services: Service Bus, Event Grid, Azure Functions
  6. Secrets and Configuration: Key Vault, managed identities, App Configuration
  7. Observability: OpenTelemetry, Azure Monitor logs and metrics

What Carried Forward, What Got Dropped, What's New

Carried forward (~60% of AZ-204)

Most of the backend services survive: Azure Functions (triggers, bindings, Durable Functions), Service Bus, Event Grid, Key Vault, App Configuration, and managed identities all carry over. Several topics are expanded rather than simply retained. Container Apps now gets deeper coverage of KEDA scaling and Dapr integration. Cosmos DB adds vector search on top of the existing NoSQL API. Container Registry picks up ACR Tasks. And managed identities extend to AKS workload identity, which matters because AKS is one of the largest new additions. If you already hold AZ-204, this 60% is review, not new study.

Dropped (~25% of AZ-204)

Comparing the AZ-204 study guide against the AI-200 course outline, seven topics are gone entirely: Blob Storage SDK, MSAL/Identity Platform, Microsoft Graph, SAS tokens, API Management, Event Hubs, and Azure Container Instances. Microsoft removed the CRUD-oriented cloud app topics that do not serve AI workloads. You will not be tested on generating SAS tokens or calling Graph endpoints. If you spent weeks on MSAL token flows for AZ-204, that knowledge still applies to real projects, but it will not appear on AI-200.

Brand new (~30% of AI-200)

Based on the AI-200T00 course structure, AKS spans three modules and accounts for an estimated 20-25 exam questions, the single largest new topic by question count. You need cluster creation with the Azure CLI, ACR integration via --attach-acr, workload deployment with kubectl, and scaling with HPA and the cluster autoscaler. Configuration covers ConfigMaps, Secrets, the Key Vault CSI Driver, persistent storage (Azure Disk for RWO, Azure Files for RWX), taints and tolerations, and the difference between resource requests and limits. Monitoring adds Container Insights, KQL queries for pod status and events, managed Prometheus, and alerting on OOMKills and resource exhaustion.
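Two of those configuration rules reduce to simple decision logic. Here is a minimal Python sketch of them; the storage class names (`managed-csi`, `azurefile-csi`) are AKS's built-in defaults, and the helper names themselves are invented for illustration:

```python
# Hypothetical helpers illustrating two AKS configuration rules from the
# text: access mode determines the storage class, and a pod's resource
# limit must be at least its request.

def choose_storage_class(access_mode: str) -> str:
    """RWO (single-node read/write) -> Azure Disk; RWX (shared) -> Azure Files."""
    mapping = {
        "ReadWriteOnce": "managed-csi",    # Azure Disk CSI storage class
        "ReadWriteMany": "azurefile-csi",  # Azure Files CSI storage class
    }
    if access_mode not in mapping:
        raise ValueError(f"unsupported access mode: {access_mode}")
    return mapping[access_mode]

def validate_resources(request_mi: int, limit_mi: int) -> bool:
    """The request is what the scheduler reserves; the limit is the hard
    ceiling. A limit below the request makes the pod spec invalid."""
    return 0 < request_mi <= limit_mi
```

The same request/limit distinction is what drives the OOMKill alerts mentioned above: a container that exceeds its memory limit is killed, not throttled.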

PostgreSQL with pgvector is where AI-200 tests your understanding of vector databases, covering an estimated 8-11 questions across three modules. The foundation is Flexible Server provisioning with Entra ID auth and PgBouncer connection pooling. From there, you enable the pgvector extension for vector storage and work with distance operators: L2 (<->), cosine (<=>), and inner product (<#>). Batch embedding pipelines use Azure OpenAI to generate vectors at scale. Index optimization is where it gets specific: IVFFlat (partition-based, best under 100K vectors with frequent updates) versus HNSW (graph-based, best above 500K static vectors). Hybrid search combines vector similarity with metadata filters using standard WHERE clauses.
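The three distance operators are easier to remember once you see what they compute. pgvector evaluates these in-database; the pure-Python equivalents below are only a sketch of the semantics (note that `<#>` returns the *negated* inner product so that `ORDER BY` ascends toward the best match):

```python
import math

# Pure-Python equivalents of the three pgvector distance operators.

def l2_distance(a, b):
    """SQL: embedding <-> query (Euclidean distance)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_distance(a, b):
    """SQL: embedding <=> query (1 - cosine similarity)."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norms

def neg_inner_product(a, b):
    """SQL: embedding <#> query (negated inner product)."""
    return -sum(x * y for x, y in zip(a, b))
```

In all three cases, a smaller result means a closer match, which is why every pgvector similarity query ends in `ORDER BY embedding <=> query LIMIT k`.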

Azure Managed Redis replaces the narrow "Azure Cache for Redis" coverage from AZ-204 with a broader scope across an estimated 7-12 questions. Five core data types (strings, hashes, lists, sets, sorted sets) and caching patterns (cache-aside, write-through, write-behind) form the baseline. The exam also tests event messaging: Pub/Sub for fire-and-forget broadcasting versus Streams for durable at-least-once delivery with consumer groups (XREADGROUP, XACK). On the Enterprise tier, RediSearch enables vector similarity search using FLAT and HNSW indexes combined with tag, numeric, and text filters.
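Cache-aside is the baseline pattern the exam assumes you can reason about. A minimal sketch, with a plain dict standing in for the Redis client (a real implementation would use redis-py's `get`/`set` with a TTL):

```python
# Cache-aside: check the cache first, fall back to the source of truth
# on a miss, then populate the cache so the next read is a hit. The dict
# and db_load function are stand-ins for Redis and the backing database.

cache: dict[str, str] = {}

def db_load(key: str) -> str:
    """Stand-in for the (slow) backing database query."""
    return f"value-for-{key}"

def get_with_cache_aside(key: str) -> tuple[str, bool]:
    """Returns (value, was_cache_hit)."""
    if key in cache:
        return cache[key], True       # hit: skip the database entirely
    value = db_load(key)              # miss: read through to the database
    cache[key] = value                # populate for subsequent reads
    return value, False
```

Write-through and write-behind differ only in when the cache write happens relative to the database write: synchronously on every write, or asynchronously in batches.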

OpenTelemetry rounds out the new content with an estimated 5-7 questions. The Azure Monitor OpenTelemetry Distro provides a one-line setup via UseAzureMonitor(), replacing the proprietary Application Insights SDK. Custom spans use ActivitySource, custom metrics use Meter instruments, and W3C TraceContext propagation handles distributed trace correlation across services. Sampling strategies control telemetry volume and cost, which is the kind of production concern the exam now prioritizes.
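Sampling is worth internalizing because it is the cost lever. OpenTelemetry's ratio-based sampler makes a deterministic keep/drop decision from the trace ID, so every service in a distributed trace agrees without coordination. A simplified Python sketch of the idea (not the SDK's exact arithmetic):

```python
# Deterministic ratio sampling in the style of OpenTelemetry's
# TraceIdRatioBased sampler: the decision is a pure function of the
# trace id, so all services in one trace keep or drop it together.

BOUND_MAX = 2 ** 63  # width of the comparison space (simplified)

def should_sample(trace_id: int, ratio: float) -> bool:
    """Keep roughly `ratio` of traces; same trace id -> same decision."""
    return (trace_id % BOUND_MAX) < int(ratio * BOUND_MAX)
```

At `ratio=0.1` you pay for a tenth of the telemetry volume while still seeing a representative slice of traffic, which is exactly the production tradeoff the exam is signaling.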

Three Shifts Worth Understanding

The topic changes above are not random. They reflect a different definition of what an Azure developer does.

Vector databases replace blob storage

If your application retrieves context from a knowledge base before passing it to a language model, you are building a RAG pipeline, and the retrieval layer runs on one of three backends the exam now tests.

Cosmos DB supports vector search for globally distributed workloads. PostgreSQL with pgvector handles complex hybrid queries where you combine vector similarity with metadata filters in standard WHERE clauses. Redis provides low-latency vector retrieval on the Enterprise tier using FLAT and HNSW indexes. Each backend has different index types (IVFFlat, HNSW, FLAT), different distance operators, and different tradeoffs around dataset size and query complexity.
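Whatever the backend, hybrid search follows the same shape: filter on metadata, then rank the survivors by vector distance. A sketch over an in-memory list (the documents, field names, and query are invented for illustration):

```python
import math

# Hybrid search: metadata filter first (the SQL WHERE clause), then rank
# the survivors by cosine distance to the query vector.

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norms

def hybrid_search(docs, query_vec, category, k=2):
    candidates = [d for d in docs if d["category"] == category]  # metadata filter
    candidates.sort(key=lambda d: cosine_distance(d["embedding"], query_vec))
    return [d["id"] for d in candidates[:k]]                     # top-k by distance
```

The order of operations is the interesting part: pre-filtering shrinks the vector search space, but it can also defeat an approximate index, which is why each backend documents its own pre- versus post-filter behavior.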

AZ-204 treated storage as a CRUD problem: upload blobs, set access tiers, generate SAS tokens. AI-200 treats storage as a search problem, and the skill gap between "call a PUT endpoint" and "choose the right index type for 500K embeddings" is not small.

AKS moves from infrastructure to developer concern

AI workloads need GPU-enabled nodes isolated from general compute, custom operators, and fine-grained resource limits. Container Apps cannot give you any of that. AI-200 assigns three full modules to AKS: deployment, configuration, and monitoring.

The exam expects you to select between Azure Disk (RWO) and Azure Files (RWX) storage classes, integrate secrets through the Key Vault CSI Driver, and manage node pools with taints and tolerations. Monitoring means writing KQL queries against Container Insights to diagnose pod failures and resource exhaustion. AZ-204 kept you at the Container Apps level, where Kubernetes was an implementation detail you never touched. That abstraction no longer holds when your inference service needs a dedicated A100 node pool with specific resource requests and limits.

OpenTelemetry replaces proprietary instrumentation

Your tracing code now works the same whether telemetry flows to Azure Monitor, Jaeger, or Datadog. The Application Insights SDK locked you into Microsoft's instrumentation API, Microsoft's backend, and Microsoft's query tools. AI-200 replaces that instrumentation layer with OpenTelemetry, the CNCF-backed open standard.

The Azure Monitor OpenTelemetry Distro makes setup a one-liner with UseAzureMonitor(), but the exam goes deeper. Custom instrumentation means creating spans with ActivitySource and recording metrics with Meter instruments. Distributed trace correlation relies on W3C TraceContext headers propagated across service boundaries. Sampling configuration controls telemetry volume, which directly affects cost at scale. Azure Monitor still serves as the analysis backend; what changed is the instrumentation contract.
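The W3C TraceContext header that carries the correlation has a fixed shape: `version-traceid-spanid-flags`. Real SDKs inject and extract it through their propagator APIs; building it by hand shows exactly what crosses the service boundary:

```python
# The W3C `traceparent` header: 2-hex-digit version, 32-hex-digit trace
# id, 16-hex-digit parent span id, 2-hex-digit flags (bit 0 = sampled).

def build_traceparent(trace_id: int, span_id: int, sampled: bool) -> str:
    flags = 0x01 if sampled else 0x00
    return f"00-{trace_id:032x}-{span_id:016x}-{flags:02x}"

def parse_traceparent(header: str) -> tuple[int, int, bool]:
    _version, trace_id, span_id, flags = header.split("-")
    return int(trace_id, 16), int(span_id, 16), bool(int(flags, 16) & 0x01)
```

The sampled flag ties back to the sampling decision: downstream services read it from the incoming header rather than re-deciding, which is what keeps a distributed trace whole.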

What This Means for Your Study Plan

If you already hold AZ-204: roughly 60% carries forward. Your knowledge of Azure Functions, Service Bus, Event Grid, Key Vault, managed identities, and Cosmos DB basics is still valid. The gap areas are AKS (the largest single investment if you have not worked with Kubernetes), PostgreSQL with pgvector, Azure Managed Redis vector storage and Streams, and OpenTelemetry custom instrumentation. Budget 3-4 weeks of focused study on those new topics, then 1 week reviewing carried-over material to make sure nothing has shifted in scope.

If you are currently studying for AZ-204: you have a decision to make before July 31, 2026. If you are close to passing, finish it: the credential stays valid through its full renewal cycle. If you are early in your studies, pivot to AI-200 now and skip the dropped topics entirely. There is no reason to invest time in the Blob Storage SDK, MSAL, Microsoft Graph, API Management, or Event Hubs when those topics will not appear on the replacement exam.

If you are starting fresh: go directly to AI-200. The AI-200T00 course structure and Microsoft Learn paths give you everything you need; AZ-204 material adds no value at this point.

The 8-week plan below assumes you are starting from scratch or pivoting from early AZ-204 study:

8-week AI-200 study plan

The AI-200T00 course and the Microsoft Learn paths aligned to each domain are the primary resources once they are published alongside the beta exam. Are you finishing AZ-204 before July or pivoting to AI-200 now?
