<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Anton Staykov</title>
    <description>The latest articles on DEV Community by Anton Staykov (@astaykov).</description>
    <link>https://dev.to/astaykov</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3827606%2Fbb3aa4b8-90b5-4db5-8ad5-f9bd05ab762b.jpg</url>
      <title>DEV Community: Anton Staykov</title>
      <link>https://dev.to/astaykov</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/astaykov"/>
    <language>en</language>
    <item>
      <title>Paying to Be the Product: The AI Privacy Illusion Nobody's Talking About</title>
      <dc:creator>Anton Staykov</dc:creator>
      <pubDate>Mon, 27 Apr 2026 13:17:33 +0000</pubDate>
      <link>https://dev.to/astaykov/paying-to-be-the-product-the-ai-privacy-illusion-nobodys-talking-about-3ncc</link>
      <guid>https://dev.to/astaykov/paying-to-be-the-product-the-ai-privacy-illusion-nobodys-talking-about-3ncc</guid>
      <description>&lt;p&gt;&lt;em&gt;In tech, we've always said: "If you're not paying for the product — you are the product." In today's AI reality, that statement needs an update: we are paying to be the product.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;I spend my days working in identity and security at Microsoft. But when I go home, I'm just another consumer — using AI assistants across every major platform, paying for premium tiers, trusting that my subscription buys me not just better answers, but basic respect for my data.&lt;/p&gt;

&lt;p&gt;That trust led me down a rabbit hole I didn't expect.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Moment That Started This
&lt;/h2&gt;

&lt;p&gt;A few weeks ago, I was reviewing the privacy settings on Google Gemini Advanced — a service I pay for — and I realized something unsettling. To opt out of having my conversations used for AI model training, I had to turn off &lt;strong&gt;"Keep Activity"&lt;/strong&gt;. Sounds reasonable, right?&lt;/p&gt;

&lt;p&gt;Except turning off Keep Activity doesn't just stop training. It disables &lt;strong&gt;everything&lt;/strong&gt;: chat history, personalization, context across sessions, the ability to revisit old conversations. Every interaction becomes a blank slate. The AI assistant I'm paying a premium for becomes, functionally, lobotomized.&lt;/p&gt;

&lt;p&gt;The choice Google presents to its paying customers is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Option A&lt;/strong&gt;: Full-featured Gemini that uses your data to train models.&lt;br&gt;
&lt;strong&gt;Option B&lt;/strong&gt;: A crippled Gemini that remembers nothing — but hey, your data isn't used for training. &lt;em&gt;Mostly.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I'll come back to that "mostly" in a moment.&lt;/p&gt;

&lt;p&gt;This bothered me enough that I decided to audit the privacy policies of &lt;strong&gt;every major AI assistant&lt;/strong&gt; I use. Not surface-level marketing claims — the actual terms, the actual privacy notices, the actual small print.&lt;/p&gt;

&lt;p&gt;What I found was... illuminating.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Audit: Six Providers, One Question
&lt;/h2&gt;

&lt;p&gt;The question was simple: &lt;strong&gt;As a paying individual consumer, can I opt out of my data being used to train AI models without degrading the service I'm paying for?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I examined: Google Gemini, OpenAI ChatGPT, Anthropic Claude, Microsoft Copilot, Mistral Le Chat, and Perplexity.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Good News
&lt;/h3&gt;

&lt;p&gt;Every single provider now offers some form of opt-out mechanism for consumers. That's progress. A year or two ago, that wasn't the case across the board.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Bad News
&lt;/h3&gt;

&lt;p&gt;The devil is in the details — and those details vary wildly.&lt;/p&gt;




&lt;h2&gt;
  
  
  What the Small Print Actually Says
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Google Gemini: The Opt-Out That Costs You the Product
&lt;/h3&gt;

&lt;p&gt;Google's &lt;a href="https://support.google.com/gemini/answer/13594961#privacy_notice" rel="noopener noreferrer"&gt;Gemini Apps Privacy Notice&lt;/a&gt; deserves close reading — because the more you read, the less clear it becomes.&lt;/p&gt;

&lt;p&gt;The only mechanism to opt out of AI model training is to turn off &lt;strong&gt;"Keep Activity"&lt;/strong&gt;. Here's what that actually does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;❌ Your chat history is &lt;strong&gt;no longer saved&lt;/strong&gt; — chats are retained for 72 hours, then deleted&lt;/li&gt;
&lt;li&gt;❌ &lt;strong&gt;No personalization&lt;/strong&gt; — Gemini doesn't learn from your past conversations&lt;/li&gt;
&lt;li&gt;❌ &lt;strong&gt;No context across sessions&lt;/strong&gt; — every chat starts from zero&lt;/li&gt;
&lt;li&gt;❌ &lt;strong&gt;You can't go back to old conversations&lt;/strong&gt; — they're gone after 72 hours&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On every other platform I tested, opting out of training is a standalone toggle. You keep your history, your personalization, your context. On Gemini, opting out of training means opting out of the product being useful.&lt;/p&gt;

&lt;p&gt;But it gets more interesting. Google's own documentation appears to contradict itself on the same page.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In the "Configuring your settings" section&lt;/strong&gt;, the Privacy Notice states:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"The settings in Gemini Apps Activity **don't control processing of your chats to create anonymized data&lt;/em&gt;* to improve Google services."*&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This suggests anonymized data still feeds into service improvement — which Google defines elsewhere on the same page as including &lt;em&gt;"generative AI models and other machine-learning technologies."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Yet in the FAQ section&lt;/strong&gt; on the very same page, under "What does the Keep Activity setting control?":&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"If Keep Activity is off and you don't submit feedback, Google also **does not use your future chats to improve its AI models&lt;/em&gt;&lt;em&gt;."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;These two statements are in tension. Does anonymized data still get used for model improvement when Keep Activity is off, or doesn't it? Google's own privacy documentation gives you both answers on the same page.&lt;/p&gt;

&lt;p&gt;And regardless of which interpretation you trust, one clause remains unambiguous — under "How long we retain your data":&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"Chats reviewed by human reviewers (and related data like your language, device type, location info, or feedback) **are not deleted when you delete your activity&lt;/em&gt;&lt;em&gt;. Instead, they are retained for **up to three years&lt;/em&gt;&lt;em&gt;."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To summarize: the only opt-out available degrades your paid product to near-uselessness, the documentation contradicts itself on whether anonymized data is still used, and human-reviewed chats persist for three years even after you delete them.&lt;/p&gt;

&lt;p&gt;As a paying customer, this doesn't feel like a privacy control. It feels like an illusion of control — wrapped in contradictory language that would require a lawyer to parse.&lt;/p&gt;

&lt;p&gt;📄 &lt;a href="https://support.google.com/gemini/answer/13594961#privacy_notice" rel="noopener noreferrer"&gt;Gemini Apps Privacy Notice&lt;/a&gt; · &lt;a href="https://policies.google.com/privacy" rel="noopener noreferrer"&gt;Google Privacy Policy&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  OpenAI ChatGPT: Clean Opt-Out, One Loophole
&lt;/h3&gt;

&lt;p&gt;OpenAI's approach is significantly better. The "Improve the model for everyone" toggle under &lt;strong&gt;Settings &amp;gt; Data Controls&lt;/strong&gt; is independent of your chat history. You can keep your conversations, keep your context, and still opt out of training.&lt;/p&gt;

&lt;p&gt;But there's a catch, documented in their own help center:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"Even if you have opted out of training, you can still choose to provide feedback... If you choose to provide feedback, **the entire conversation associated with that feedback may be used to train our models&lt;/em&gt;&lt;em&gt;."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That thumbs-up button you casually tap? It potentially feeds your entire conversation into the training pipeline — even if you've opted out.&lt;/p&gt;

&lt;p&gt;📄 &lt;a href="https://openai.com/policies/row-terms-of-use/" rel="noopener noreferrer"&gt;OpenAI Terms of Use&lt;/a&gt; · &lt;a href="https://help.openai.com/en/articles/5722486-how-your-data-is-used-to-improve-model-performance" rel="noopener noreferrer"&gt;How your data is used to improve model performance&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Anthropic Claude: Honest About Its Exceptions
&lt;/h3&gt;

&lt;p&gt;Anthropic updated its consumer terms in August 2025, and to its credit, the company is transparent about what opt-out does and doesn't cover. From &lt;a href="https://www.anthropic.com/legal/consumer-terms" rel="noopener noreferrer"&gt;Section 4 of their Consumer Terms&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"We may use Materials to provide, maintain, and improve the Services... **unless you opt out&lt;/em&gt;* of training through your account settings. Even if you opt out, we will use Materials for model training when: &lt;strong&gt;(1) you provide Feedback&lt;/strong&gt; to us regarding any Materials, or &lt;strong&gt;(2) your Materials are flagged for safety review&lt;/strong&gt;..."*&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Two explicit carve-outs. But at least they tell you upfront, and the opt-out doesn't degrade your service.&lt;/p&gt;

&lt;p&gt;📄 &lt;a href="https://www.anthropic.com/legal/consumer-terms" rel="noopener noreferrer"&gt;Anthropic Consumer Terms&lt;/a&gt; · &lt;a href="https://www.anthropic.com/policies" rel="noopener noreferrer"&gt;Anthropic Privacy Policy&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Microsoft Copilot: Separate Toggles, No Training Carve-Outs
&lt;/h3&gt;

&lt;p&gt;Full disclosure: I work at Microsoft. I'm including Copilot because excluding it would be intellectually dishonest, and my job doesn't exempt me from critical evaluation.&lt;/p&gt;

&lt;p&gt;Copilot offers independent toggles for personalization, memory, and model training. The &lt;a href="https://support.microsoft.com/en-us/topic/microsoft-copilot-privacy-controls-8e479f27-6eb6-48c5-8d6a-c134062e2be6" rel="noopener noreferrer"&gt;Copilot Privacy Controls&lt;/a&gt; page states:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"Opting out will exclude your future conversation activities from being used for training these AI models."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There's a caveat that data may still be used for &lt;em&gt;"general product or system improvements... digital safety, security, and compliance"&lt;/em&gt; — but this is explicitly separated from model training.&lt;/p&gt;

&lt;p&gt;📄 &lt;a href="https://www.microsoft.com/servicesagreement" rel="noopener noreferrer"&gt;Microsoft Services Agreement&lt;/a&gt; · &lt;a href="https://support.microsoft.com/en-us/topic/microsoft-copilot-privacy-controls-8e479f27-6eb6-48c5-8d6a-c134062e2be6" rel="noopener noreferrer"&gt;Copilot Privacy Controls&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Mistral Le Chat: The GDPR Advantage
&lt;/h3&gt;

&lt;p&gt;As a French/EU company subject to GDPR, Mistral takes a notably clean approach. The training opt-out is a &lt;a href="https://help.mistral.ai/en/articles/455207-can-i-opt-out-of-my-input-or-output-data-being-used-for-training" rel="noopener noreferrer"&gt;simple toggle&lt;/a&gt; — independent of chat functionality. Paid Pro users are opted out by default.&lt;/p&gt;

&lt;p&gt;The one caveat: user feedback (ratings/comments) is always used regardless of your opt-out setting. But chat content, uploaded documents, and conversation history are respected.&lt;/p&gt;

&lt;p&gt;📄 &lt;a href="https://mistral.ai/terms/" rel="noopener noreferrer"&gt;Mistral Terms of Service&lt;/a&gt; · &lt;a href="https://help.mistral.ai/en/articles/455207-can-i-opt-out-of-my-input-or-output-data-being-used-for-training" rel="noopener noreferrer"&gt;Mistral Opt-Out FAQ&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Perplexity: The Wildcard
&lt;/h3&gt;

&lt;p&gt;Perplexity offers an opt-out toggle for AI data retention, but the company is currently facing a &lt;a href="https://ca.pcmag.com/ai/15037/use-perplexity-lawsuit-accuses-it-of-sharing-personal-data-with-google" rel="noopener noreferrer"&gt;class-action lawsuit&lt;/a&gt; alleging that user data — including conversations in "Incognito" mode — was shared with Meta and Google for ad targeting regardless of privacy settings.&lt;/p&gt;

&lt;p&gt;The lawsuit is ongoing, and allegations aren't findings. But it's a reminder that a toggle in a settings page is only as trustworthy as the infrastructure behind it.&lt;/p&gt;

&lt;p&gt;📄 &lt;a href="https://www.perplexity.ai/hub/legal/terms-of-service" rel="noopener noreferrer"&gt;Perplexity Terms of Service&lt;/a&gt; · &lt;a href="https://www.perplexity.ai/hub/legal/privacy-policy" rel="noopener noreferrer"&gt;Perplexity Privacy Policy&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Summary That Should Concern You
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Provider&lt;/th&gt;
&lt;th&gt;Opt-out degrades service?&lt;/th&gt;
&lt;th&gt;Data still used after opt-out?&lt;/th&gt;
&lt;th&gt;Exceptions&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Google Gemini&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;❌ Yes — kills history &amp;amp; personalization&lt;/td&gt;
&lt;td&gt;⚠️ Contradictory — documentation says both yes and no&lt;/td&gt;
&lt;td&gt;3-year retention of reviewed chats; contradictory policy language&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;OpenAI ChatGPT&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ No&lt;/td&gt;
&lt;td&gt;⚠️ Feedback triggers training&lt;/td&gt;
&lt;td&gt;Thumbs up/down = full conversation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Anthropic Claude&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ No&lt;/td&gt;
&lt;td&gt;⚠️ Feedback + safety flags&lt;/td&gt;
&lt;td&gt;Explicitly documented&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Microsoft Copilot&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ No&lt;/td&gt;
&lt;td&gt;✅ No carve-outs for training&lt;/td&gt;
&lt;td&gt;Safety/compliance retention only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Mistral Le Chat&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ No&lt;/td&gt;
&lt;td&gt;✅ No (paid users auto-opted-out)&lt;/td&gt;
&lt;td&gt;Feedback always used&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Perplexity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✅ No&lt;/td&gt;
&lt;td&gt;⚠️ Under litigation&lt;/td&gt;
&lt;td&gt;Alleged sharing despite settings&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Where Is the EU on This?
&lt;/h2&gt;

&lt;p&gt;Here's what genuinely surprises me.&lt;/p&gt;

&lt;p&gt;I've spent years grumbling about EU cookie consent banners. Every website, every visit, the same tedious popups — all in the name of protecting consumer privacy for a few tracking pixels.&lt;/p&gt;

&lt;p&gt;Yet here we are in 2026, and &lt;strong&gt;AI providers are training models on paying consumers' conversations&lt;/strong&gt; with opt-out mechanisms that range from "genuine but imperfect" to "functionally deceptive" — and the regulatory silence is deafening.&lt;/p&gt;

&lt;p&gt;The GDPR was designed for exactly this scenario. Article 21 grants the right to object to data processing. Article 7 requires that consent be as easy to withdraw as it is to give. When Google forces you to choose between a functional product and privacy, is that really free consent?&lt;/p&gt;

&lt;p&gt;I'm not calling for more regulation for the sake of it — heaven knows we have enough cookie banners. But if Europe's privacy framework means anything, this is precisely where it should be applied. Not to cookies. To the AI models being trained on our most intimate conversations with technology.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Want You to Take Away
&lt;/h2&gt;

&lt;p&gt;This isn't a "don't use AI" article. I use all of these tools daily. They're transformative. But:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Check your settings. Today.&lt;/strong&gt; Most providers default to training-on. The opt-out exists, but you have to find it and flip it yourself.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Read the exceptions.&lt;/strong&gt; An opt-out toggle means nothing if the small print carves out half your data anyway. Know what "opt-out" actually means for each provider you use.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Be thoughtful about feedback.&lt;/strong&gt; That casual thumbs-up on a response? On some platforms, it opens your entire conversation to training — even if you've opted out.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Evaluate whether your paid tier actually buys you privacy.&lt;/strong&gt; For some providers, it does. For others, you're just paying for better answers while your data flows into the same training pipeline as free users.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Demand better.&lt;/strong&gt; As consumers, as technologists, as an industry. The right to use a product you pay for without surrendering your conversations to model training shouldn't be a premium feature. It should be the default.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;&lt;em&gt;The research in this article reflects publicly available terms of service and privacy policies as of April 2026. All quotes are sourced directly from official provider documentation, linked inline. Views expressed are my own and do not represent my employer.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>privacy</category>
      <category>ai</category>
      <category>agents</category>
    </item>
    <item>
      <title>From OBO APIs to Agent Identities: Entra Conditional Access Still Works the Same</title>
      <dc:creator>Anton Staykov</dc:creator>
      <pubDate>Mon, 16 Mar 2026 22:09:21 +0000</pubDate>
      <link>https://dev.to/astaykov/from-obo-apis-to-agent-identities-entra-conditional-access-still-works-the-same-140c</link>
      <guid>https://dev.to/astaykov/from-obo-apis-to-agent-identities-entra-conditional-access-still-works-the-same-140c</guid>
      <description>&lt;p&gt;Six years ago I wrote &lt;a href="https://github.com/Dayzure/AzureSQLDelegatedAuth" rel="noopener noreferrer"&gt;a small sample&lt;/a&gt; to help me better understand how the On-Behalf-Of (OBO) flow actually works — a browser SPA calling a middle-tier Web API, which called Azure SQL &lt;strong&gt;on behalf of&lt;/strong&gt; the signed-in user. Today it might be an assistive AI agent calling a MCP Server with the delegated constrains of the end-user. Same rules apply. This article explains why. &lt;/p&gt;

&lt;p&gt;Three sentences that should survive beyond any particular technology:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If your system acts on behalf of a user, delegation rules apply.&lt;br&gt;
If delegation rules apply, Conditional Access applies at token issuance for the downstream resource.&lt;br&gt;
APIs, agents, and MCP servers don't change that — they just change the shape of the middle tier.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If an AI agent acts on behalf of a user, your existing Conditional Access policies that govern how users access corporate data &lt;strong&gt;already apply&lt;/strong&gt; — automatically. You don't need to invent "agent-specific Conditional Access" for assistive agents. Assistive agents don't bypass Conditional Access. They inherit it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you understand OBO and Conditional Access from classic Web APIs, you already understand how Entra Agent ID behaves for assistive AI agents.&lt;/strong&gt; Delegation semantics didn't change. Conditional Access behavior didn't change. Only the shape of the middle tier changed.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. The Classic Delegated Access Pattern
&lt;/h2&gt;

&lt;p&gt;The baseline architecture is the standard delegated access chain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User
  ↓
SPA (interactive client)
  ↓  access token  [aud = Web API]
Web API (confidential client)
  ↓  OBO token     [aud = Resource]
Azure SQL  |  MCP Server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two facts to hold firmly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Web API &lt;strong&gt;never authenticates as itself&lt;/strong&gt; to the downstream resource. It always acts &lt;strong&gt;on behalf of the signed-in user&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The downstream resource — Azure SQL, Microsoft Graph, or an MCP server — sees a delegated token. It evaluates the user's identity and the delegated scopes. Conditional Access policies have already been enforced by Entra during token issuance.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  2. The Web API has a real identity — and delegation is explicitly granted
&lt;/h2&gt;

&lt;p&gt;This is the detail most diagrams omit, and it matters in production.&lt;/p&gt;

&lt;p&gt;The Web API is not just "code that runs somewhere". In Microsoft Entra it is a registered application with its own identity. That identity authenticates to the token endpoint as a &lt;strong&gt;confidential client&lt;/strong&gt;, using a client secret or certificate.&lt;/p&gt;

&lt;p&gt;In a real OBO implementation, the Web API does two things simultaneously:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Authenticates as itself&lt;/strong&gt; — proving "I am the Web API" using its own credentials.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Requests a delegated token&lt;/strong&gt; for the downstream resource &lt;em&gt;on behalf of&lt;/em&gt; the user, presenting the incoming user token as an assertion.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Critically, the capability to do OBO is &lt;strong&gt;not implicit&lt;/strong&gt;. The Web API's identity must be &lt;strong&gt;explicitly granted&lt;/strong&gt; the delegated permissions (scopes) for the downstream resource. The tenant administrator grants this; the Web API cannot do it on its own authority.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Practical framing:&lt;/strong&gt; the Web API authenticates as itself, but it can only mint delegated tokens because the tenant has explicitly authorized it to act on the user's behalf for the specific downstream resource.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  3. OBO in 90 Seconds
&lt;/h2&gt;

&lt;p&gt;The flow has three steps:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flp8n8qs1zxwiazziuara.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flp8n8qs1zxwiazziuara.png" alt=" " width="800" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user signs in to the client and the client receives an access token whose audience is the Web API. The client uses that access token to call the Web API.&lt;/li&gt;
&lt;li&gt;The Web API calls the token endpoint and exchanges that incoming user token — presented as an assertion — for a new one.&lt;/li&gt;
&lt;li&gt;The resulting token is minted for the downstream resource and carries the user's identity from step 1 plus the delegated scopes requested in step 2.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In protocol terms, the exchange looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;POST /&amp;lt;tenant_id&amp;gt;/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer
&amp;amp;assertion=&amp;lt;incoming_user_access_token&amp;gt;
&amp;amp;requested_token_use=on_behalf_of
&amp;amp;scope=https://database.windows.net/.default
&amp;amp;client_id=&amp;lt;api_client_id&amp;gt;
&amp;amp;client_secret=&amp;lt;api_client_secret&amp;gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note both elements in the same request: &lt;strong&gt;client authentication&lt;/strong&gt; (&lt;code&gt;client_id&lt;/code&gt; + &lt;code&gt;client_secret&lt;/code&gt;) and the &lt;strong&gt;user assertion&lt;/strong&gt; (&lt;code&gt;assertion&lt;/code&gt;). Both are required.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The invariant:&lt;/strong&gt; OBO is strictly delegated. It uses delegated scopes, not application roles. Application permissions are not involved.&lt;/p&gt;
&lt;/blockquote&gt;
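
&lt;p&gt;The raw exchange above can be sketched in Python. This is a minimal sketch that only assembles the form body shown in the HTTP example — it does not call the token endpoint, and the tenant, client, and secret values are placeholders:&lt;/p&gt;

```python
# Assemble the OBO token request from the HTTP example above.
# All identifier values are placeholders; a real implementation would
# POST this body to the v2.0 token endpoint for the tenant.
from urllib.parse import urlencode

def build_obo_request(tenant_id, api_client_id, api_client_secret, user_access_token):
    """Return (url, form_body) for an on-behalf-of token exchange."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    form = {
        # Client authentication: the Web API proves its own identity...
        "client_id": api_client_id,
        "client_secret": api_client_secret,
        # ...and presents the incoming user token as an assertion.
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "assertion": user_access_token,
        "requested_token_use": "on_behalf_of",
        # Scope targets the downstream resource (Azure SQL here).
        "scope": "https://database.windows.net/.default",
    }
    return url, urlencode(form)
```

&lt;p&gt;In production you would not hand-roll this request — a library such as MSAL (for example its &lt;code&gt;acquire_token_on_behalf_of&lt;/code&gt; method) performs the exchange, caching, and error handling for you. The sketch is only to make the two required elements — client authentication plus user assertion — visible in one place.&lt;/p&gt;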

&lt;h2&gt;
  
  
  4. Conditional Access: who is targeted vs. who "feels" it
&lt;/h2&gt;

&lt;p&gt;This is where real misunderstanding happens — and the confusion compounds when agents enter the picture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rule 1 — CA is assigned to the user and targets the resource.&lt;/strong&gt;&lt;br&gt;
A Conditional Access policy that requires MFA to access Azure SQL (or an MCP Server) is scoped to the user and the downstream resource. Evaluation happens when Microsoft Entra issues the delegated token for that resource — at the moment the OBO token is minted. Not when the user signs into the SPA. Not when the middle tier starts up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rule 2 — The middle tier is not a CA subject.&lt;/strong&gt;&lt;br&gt;
Even though the Web API has its own registered identity, it is not the subject of a CA policy intended to govern how users access corporate data. The CA policy constrains the &lt;strong&gt;user's&lt;/strong&gt; access, not the middle-tier service identity. This remains true regardless of what the middle tier is: Web API, assistive AI Agent, MCP Server, or anything else.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rule 3 — The middle tier still experiences CA outcomes.&lt;/strong&gt;&lt;br&gt;
When the Web API calls the token endpoint to acquire an OBO token, CA enforcement happens there. If CA requires step-up (like MFA), the token endpoint returns &lt;code&gt;interaction_required&lt;/code&gt; with a &lt;strong&gt;claims challenge&lt;/strong&gt;. The middle tier cannot satisfy this itself. It must surface this challenge back to the interactive client (the SPA in our case). The user performs the MFA, the client obtains a new token, and the middle tier retries the OBO exchange.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The one-liner:&lt;/strong&gt; the middle tier is a CA messenger, not a CA subject.&lt;/p&gt;
&lt;/blockquote&gt;
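
&lt;p&gt;The messenger role can be sketched as a small decision function. This is an illustrative sketch only — the function name is hypothetical, and it assumes the token endpoint's JSON error shape (&lt;code&gt;error&lt;/code&gt;, &lt;code&gt;claims&lt;/code&gt;) as described above, not any particular library's API:&lt;/p&gt;

```python
# Decide what the middle tier does with a token-endpoint response.
# The middle tier never satisfies MFA itself; it relays the claims
# challenge back to the interactive client.

def handle_obo_response(response):
    """response: parsed JSON dict returned by the OAuth token endpoint."""
    if "access_token" in response:
        # Entra issued the delegated token: CA was already satisfied.
        return {"action": "call_downstream", "token": response["access_token"]}
    if response.get("error") == "interaction_required":
        # CA demands step-up (e.g. MFA). Surface the claims challenge
        # to the SPA; the user performs MFA and the client retries.
        return {
            "action": "forward_claims_challenge",
            "claims": response.get("claims"),
        }
    # Any other error is a hard failure for this request.
    return {"action": "fail", "error": response.get("error")}
```

&lt;p&gt;The design point is the middle branch: the claims challenge must travel all the way back to the interactive client. Swallowing it in the middle tier breaks the retry path and looks, to the user, like a random failure.&lt;/p&gt;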

&lt;h2&gt;
  
  
  5. Replacing the Web API with an (assistive) AI Agent (with an Agent Identity)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0swbhwbouu2f3oyrvaj0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0swbhwbouu2f3oyrvaj0.png" alt=" " width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The transition to Entra Agent ID is explicit and mechanical:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;th&gt;Classic World&lt;/th&gt;
&lt;th&gt;Agentic World&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Middle Actor&lt;/td&gt;
&lt;td&gt;Web API&lt;/td&gt;
&lt;td&gt;AI Agent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Identity Type&lt;/td&gt;
&lt;td&gt;app reg.&lt;/td&gt;
&lt;td&gt;Agent Identity&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Access Mode&lt;/td&gt;
&lt;td&gt;Delegated (OBO)&lt;/td&gt;
&lt;td&gt;Delegated (OBO)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CA Evaluation&lt;/td&gt;
&lt;td&gt;On the resource&lt;/td&gt;
&lt;td&gt;On the resource&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Nothing in the right column is new behavior. When using Entra Agent ID in delegated mode — the appropriate mode for an assistive agent acting on behalf of a signed-in user — three requirements apply:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Agent Identity must be &lt;strong&gt;granted the delegated permissions&lt;/strong&gt; needed for the downstream resource.&lt;/li&gt;
&lt;li&gt;The Agent Identity acquires a &lt;strong&gt;delegated token&lt;/strong&gt; representing the user, carrying their identity and scopes.&lt;/li&gt;
&lt;li&gt;The Agent Identity is &lt;strong&gt;constrained by Conditional Access&lt;/strong&gt; policies that govern the user's access to that resource.&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The core continuity:&lt;/strong&gt; you replaced the middle-tier actor, not the trust boundary.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  6. MCP Server as the Downstream Resource
&lt;/h2&gt;

&lt;p&gt;MCP servers can feel "agent-native," which leads engineers to assume the security model has changed around them. It hasn't.&lt;/p&gt;

&lt;p&gt;An MCP Server is an &lt;strong&gt;OAuth-protected resource&lt;/strong&gt; — a server that accepts access tokens. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do not pass tokens through.&lt;/strong&gt; This holds for the AI agent acting on behalf of the user, and equally for an MCP Server requesting authorization to further resources on behalf of the user: each hop validates the token it received and acquires a fresh one for anything downstream. &lt;/p&gt;

&lt;p&gt;Beyond that, whether the downstream resource is Azure SQL, Microsoft Graph, or anything else, the CA and delegation rules are identical. From Microsoft Entra's perspective it is an OAuth-protected resource. Tokens are issued for it, delegated permissions are scoped to it, and Conditional Access policies target it.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. What to Actually Configure
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Model your chain.&lt;/strong&gt; Identify the downstream resource (REST API or MCP server) and the middle &lt;em&gt;actor&lt;/em&gt; (Web API or agent identity in delegated mode). &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Make delegation explicit.&lt;/strong&gt; Confirm the middle actor is granted the &lt;em&gt;delegated&lt;/em&gt; permissions required for the downstream resource.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Treat CA challenges as a product requirement.&lt;/strong&gt; Expect &lt;code&gt;interaction_required&lt;/code&gt; responses and claims challenges during downstream token acquisition. In your agent's backend, implement a handler for &lt;code&gt;MsalUiRequiredException&lt;/code&gt; (for example). Design the user interaction path to satisfy step-up — do not swallow these errors.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Never pass tokens through.&lt;/strong&gt; Validate the incoming token for the agent audience, then acquire a fresh token for any downstream resource.&lt;/li&gt;
&lt;/ol&gt;
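
&lt;p&gt;Point 4 can be sketched with an audience check. This is an illustrative sketch, not real token validation — the &lt;code&gt;api://agent&lt;/code&gt;-style audience value is a placeholder, and production code must also verify the token's signature, issuer, and expiry with a proper JWT library before trusting any claim:&lt;/p&gt;

```python
# Sketch of the "never pass tokens through" rule: confirm the incoming
# token was minted FOR this agent before acting on it. This decodes the
# payload WITHOUT verifying the signature — real validation must do both.
import base64
import json

def token_audience(jwt_token):
    """Return the aud claim from the (unverified) payload of a JWT."""
    payload_b64 = jwt_token.split(".")[1]
    # Re-pad base64url before decoding.
    padding = "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(payload_b64 + padding))
    return payload.get("aud")

def accept_for_agent(jwt_token, agent_audience):
    """Accept only tokens addressed to this agent; never forward them."""
    return token_audience(jwt_token) == agent_audience
```

&lt;p&gt;If the audience check passes, the agent then performs its own OBO exchange for the downstream resource — it never replays the incoming token, which the downstream resource would (correctly) reject as having the wrong audience anyway.&lt;/p&gt;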

</description>
      <category>ai</category>
      <category>microsoft</category>
      <category>entra</category>
    </item>
  </channel>
</rss>
