<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Chandan Galani</title>
    <description>The latest articles on DEV Community by Chandan Galani (@lola0786).</description>
    <link>https://dev.to/lola0786</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3674516%2Fc2396d4b-e3fb-4856-820b-c64b8be14826.jpeg</url>
      <title>DEV Community: Chandan Galani</title>
      <link>https://dev.to/lola0786</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lola0786"/>
    <language>en</language>
    <item>
      <title>AGENTS CONTROL PLANE</title>
      <dc:creator>Chandan Galani</dc:creator>
      <pubDate>Thu, 08 Jan 2026 12:08:32 +0000</pubDate>
      <link>https://dev.to/lola0786/agents-control-plane-27bc</link>
      <guid>https://dev.to/lola0786/agents-control-plane-27bc</guid>
      <description>&lt;p&gt;We built a deterministic control layer that sits between AI agents &lt;br&gt;
and production systems. In healthcare, this means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pre-execution validation (HIPAA, FDA compliance)&lt;/li&gt;
&lt;li&gt;Immutable audit trail (survives admin wipes)&lt;/li&gt;
&lt;li&gt;Shadow mode testing (zero patient impact)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We just ran 284K transactions through shadow mode and caught $25K+ in risky actions that would have violated policies.&lt;/p&gt;
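
&lt;p&gt;A minimal sketch of what shadow-mode evaluation can look like (the function and field names here are illustrative assumptions, not the actual implementation):&lt;/p&gt;

```python
# Illustrative sketch of "shadow mode": every proposed action is run
# through policy evaluation only, never executed, and the run is
# summarized so risky actions can be reviewed after the fact.
# `shadow_run` and the action/policy shapes are assumptions for this sketch.

def shadow_run(actions, policy):
    """Evaluate each action with `policy` (no side effects) and
    return a summary of what would have been blocked."""
    flagged = []
    for action in actions:
        verdict = policy(action)  # pure evaluation: "ALLOW" or "BLOCK"
        if verdict == "BLOCK":
            flagged.append(action)
    return {"total": len(actions), "flagged": len(flagged), "actions": flagged}
```

&lt;p&gt;The key property is that the policy function is pure: it inspects proposed actions and returns a verdict, so it can run against live traffic with zero patient impact.&lt;/p&gt;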

&lt;p&gt;Would you be open to a 15-minute call? Happy to show how this maps to clinical compliance workflows.&lt;/p&gt;

&lt;p&gt;Live demo API: &lt;a href="http://galani-alb-2013282884.ap-south-1.elb.amazonaws.com" rel="noopener noreferrer"&gt;http://galani-alb-2013282884.ap-south-1.elb.amazonaws.com&lt;/a&gt;&lt;br&gt;
GitHub: &lt;a href="https://github.com/LOLA0786/PrivateVault-Mega-Repo" rel="noopener noreferrer"&gt;https://github.com/LOLA0786/PrivateVault-Mega-Repo&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Intent should have rules — not just algorithms.

We’re building an Intent Social Discovery platform that understands intent before action.

Coming 2/26.</title>
      <dc:creator>Chandan Galani</dc:creator>
      <pubDate>Fri, 02 Jan 2026 03:51:28 +0000</pubDate>
      <link>https://dev.to/lola0786/intent-should-have-rules-not-just-algorithms-were-building-an-intent-social-discovery-20af</link>
      <guid>https://dev.to/lola0786/intent-should-have-rules-not-just-algorithms-were-building-an-intent-social-discovery-20af</guid>
      <description></description>
    </item>
    <item>
      <title>Intent should have rules — not just algorithms.

We’re building an Intent Social Discovery platform that understands intent before action.

Coming 2/26.</title>
      <dc:creator>Chandan Galani</dc:creator>
      <pubDate>Wed, 31 Dec 2025 07:54:34 +0000</pubDate>
      <link>https://dev.to/lola0786/intent-should-have-rules-not-just-algorithms-were-building-an-intent-social-discovery-1i4c</link>
      <guid>https://dev.to/lola0786/intent-should-have-rules-not-just-algorithms-were-building-an-intent-social-discovery-1i4c</guid>
      <description></description>
    </item>
    <item>
      <title>Thoughts become intent.
Intent should have rules — not just algorithms.

We’re building an Intent Social Discovery platform that understands intent before action.

Coming 2/26.</title>
      <dc:creator>Chandan Galani</dc:creator>
      <pubDate>Wed, 31 Dec 2025 07:54:21 +0000</pubDate>
      <link>https://dev.to/lola0786/thoughts-become-intent-intent-should-have-rules-not-just-algorithms-were-building-an-4e3h</link>
      <guid>https://dev.to/lola0786/thoughts-become-intent-intent-should-have-rules-not-just-algorithms-were-building-an-4e3h</guid>
      <description></description>
    </item>
    <item>
      <title>How should AI actions be authorized before execution? Built a small GPT to explore this</title>
      <dc:creator>Chandan Galani</dc:creator>
      <pubDate>Sun, 28 Dec 2025 07:55:12 +0000</pubDate>
      <link>https://dev.to/lola0786/how-should-ai-actions-be-authorized-before-execution-built-a-small-gpt-to-explore-this-1gh6</link>
      <guid>https://dev.to/lola0786/how-should-ai-actions-be-authorized-before-execution-built-a-small-gpt-to-explore-this-1gh6</guid>
      <description>&lt;p&gt;We’ve been thinking a lot about how agentic AI systems should be controlled before they touch real systems (money, data, prod infra).&lt;/p&gt;

&lt;p&gt;We put together a small public GPT that simulates pre-execution authorization:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You describe a proposed action&lt;/li&gt;
&lt;li&gt;It returns ALLOW or BLOCK, with reasons&lt;/li&gt;
&lt;li&gt;No execution, no enforcement — just evaluation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s meant as a discussion artifact, not a product pitch.&lt;/p&gt;

&lt;p&gt;Link:&lt;br&gt;
&lt;a href="https://chatgpt.com/g/g-6950ce624e988191a12212c322711656-uaal-pre-execution-authorization-simulator" rel="noopener noreferrer"&gt;https://chatgpt.com/g/g-6950ce624e988191a12212c322711656-uaal-pre-execution-authorization-simulator&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Genuinely curious:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How are others thinking about authorization vs execution for AI agents?&lt;/li&gt;
&lt;li&gt;Should this live inside the agent, or outside as a control plane?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Happy to take criticism — this is early thinking.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>security</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Establishing the zero-trust standard for AI agents. We want to move the industry from just "monitoring" AI to actually "controlling" it—where every autonomous action is replay-verifiable, CI-enforced, and cryptographically proven.</title>
      <dc:creator>Chandan Galani</dc:creator>
      <pubDate>Thu, 25 Dec 2025 09:30:41 +0000</pubDate>
      <link>https://dev.to/lola0786/establishing-the-zero-trust-standard-for-ai-agents-we-want-to-move-the-industry-from-just-5f0a</link>
      <guid>https://dev.to/lola0786/establishing-the-zero-trust-standard-for-ai-agents-we-want-to-move-the-industry-from-just-5f0a</guid>
      <description></description>
    </item>
    <item>
      <title>I’m building a deterministic policy firewall for AI systems — looking for technical feedback</title>
      <dc:creator>Chandan Galani</dc:creator>
      <pubDate>Thu, 25 Dec 2025 05:58:27 +0000</pubDate>
      <link>https://dev.to/lola0786/im-building-a-deterministic-policy-firewall-for-ai-systems-looking-for-technical-feedback-2n71</link>
      <guid>https://dev.to/lola0786/im-building-a-deterministic-policy-firewall-for-ai-systems-looking-for-technical-feedback-2n71</guid>
      <description>&lt;p&gt;I’ve been working on a small but opinionated system and would love technical feedback from people who’ve dealt with AI in regulated or high-risk environments.&lt;/p&gt;

&lt;p&gt;The core idea is simple:&lt;/p&gt;

&lt;p&gt;AI systems can &lt;em&gt;propose&lt;/em&gt; actions. Something else must decide whether those actions are allowed to execute.&lt;/p&gt;

&lt;p&gt;This project is not about “understanding intent” perfectly. Intent normalization is deliberately lossy (regex / LLM / upstream systems).&lt;/p&gt;

&lt;p&gt;The invariant is a deterministic policy layer that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;blocks unsafe or illegal execution&lt;/li&gt;
&lt;li&gt;fails closed when inputs are ambiguous&lt;/li&gt;
&lt;li&gt;produces a tamper-evident audit trail&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of it as an execution firewall or control plane for AI agents.&lt;/p&gt;
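
&lt;p&gt;A minimal sketch of the fail-closed pattern described above (all names are hypothetical; this is not the project’s actual API):&lt;/p&gt;

```python
# Hypothetical sketch of a deterministic, fail-closed policy gate.
# The invariant: nothing executes unless a known rule explicitly
# allows it, and ambiguous input is blocked rather than guessed at.

def evaluate(action, rules):
    """Return ("ALLOW" or "BLOCK", reasons). Defaults to BLOCK."""
    reasons = []
    kind = action.get("kind")
    if kind is None:
        reasons.append("missing action kind: ambiguous input, fail closed")
        return "BLOCK", reasons
    rule = rules.get(kind)
    if rule is None:
        reasons.append(f"no rule registered for {kind!r}: fail closed")
        return "BLOCK", reasons
    if rule(action):
        reasons.append(f"rule for {kind!r} permitted the action")
        return "ALLOW", reasons
    reasons.append(f"rule for {kind!r} denied the action")
    return "BLOCK", reasons
```

&lt;p&gt;Because every path that lacks an explicit ALLOW returns BLOCK, the gate stays deterministic and auditable: the reasons list is the raw material for the tamper-evident trail.&lt;/p&gt;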

&lt;p&gt;I’ve tested it across:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;fintech (loan approvals, AML-style constraints)&lt;/li&gt;
&lt;li&gt;healthtech (prescription safety, controlled substances, pregnancy)&lt;/li&gt;
&lt;li&gt;legal (M&amp;amp;A, antitrust thresholds)&lt;/li&gt;
&lt;li&gt;insurance, e-commerce, and government scenarios&lt;/li&gt;
&lt;li&gt;including unstructured natural-language inputs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is early-stage and intentionally conservative. False positives escalate; false negatives are unacceptable.&lt;/p&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/LOLA0786/Intent-Engine-Api" rel="noopener noreferrer"&gt;https://github.com/LOLA0786/Intent-Engine-Api&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’m &lt;em&gt;not&lt;/em&gt; looking for product feedback — mainly architectural criticism:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Where does this break down?&lt;/li&gt;
&lt;li&gt;What would you challenge if you were deploying this?&lt;/li&gt;
&lt;li&gt;What’s missing at the execution boundary?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Happy to clarify assumptions.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>security</category>
      <category>discuss</category>
      <category>ai</category>
    </item>
    <item>
      <title>We built an API that lets AI systems check if humans actually care before acting.
It’s a simple intent-verification gate for AI agents.
Early access, prepaid usage.</title>
      <dc:creator>Chandan Galani</dc:creator>
      <pubDate>Tue, 23 Dec 2025 06:18:44 +0000</pubDate>
      <link>https://dev.to/lola0786/we-built-an-api-that-lets-ai-systems-check-if-humans-actually-care-before-acting-its-a-simple-4p9o</link>
      <guid>https://dev.to/lola0786/we-built-an-api-that-lets-ai-systems-check-if-humans-actually-care-before-acting-its-a-simple-4p9o</guid>
      <description>&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
        &lt;div class="c-embed__cover"&gt;
          &lt;a href="https://lnkd.in/eG2Xqb-f" class="c-link align-middle" rel="noopener noreferrer"&gt;
            &lt;img alt="" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic.licdn.com%2Fscds%2Fcommon%2Fu%2Fimages%2Flogos%2Ffavicons%2Fv1%2Ffavicon.ico" height="64" class="m-0" width="64"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="c-embed__body"&gt;
        &lt;h2 class="fs-xl lh-tight"&gt;
          &lt;a href="https://lnkd.in/eG2Xqb-f" rel="noopener noreferrer" class="c-link"&gt;
            LinkedIn
          &lt;/a&gt;
        &lt;/h2&gt;
          &lt;p class="truncate-at-3"&gt;
            This link will take you to a page that’s not on LinkedIn
          &lt;/p&gt;
        &lt;div class="color-secondary fs-s flex items-center"&gt;
            &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic.licdn.com%2Faero-v1%2Fsc%2Fh%2Fal2o9zrvru7aqj8e1x2rzsrca" width="64" height="64"&gt;
          lnkd.in
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


</description>
    </item>
    <item>
      <title>Why AI Agents Should Check for Human Intent Before Acting</title>
      <dc:creator>Chandan Galani</dc:creator>
      <pubDate>Tue, 23 Dec 2025 05:48:18 +0000</pubDate>
      <link>https://dev.to/lola0786/why-ai-agents-should-check-for-human-intent-before-acting-1558</link>
      <guid>https://dev.to/lola0786/why-ai-agents-should-check-for-human-intent-before-acting-1558</guid>
      <description>&lt;p&gt;AI agents are getting very good at doing things.&lt;/p&gt;

&lt;p&gt;They can send messages, trigger workflows, approve steps, and automate decisions. But while building and observing agentic systems, I kept running into a quiet problem:&lt;/p&gt;

&lt;p&gt;AI agents often act without knowing whether humans actually care.&lt;/p&gt;

&lt;h2&gt;The missing signal&lt;/h2&gt;

&lt;p&gt;Most agent workflows answer questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is the next best action?&lt;/li&gt;
&lt;li&gt;Is this action allowed by policy?&lt;/li&gt;
&lt;li&gt;Is the model confident enough?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But they rarely answer: is there real human intent or demand behind this action right now?&lt;/p&gt;

&lt;p&gt;As a result, agents can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;trigger unnecessary automations&lt;/li&gt;
&lt;li&gt;send low-signal notifications&lt;/li&gt;
&lt;li&gt;act prematurely&lt;/li&gt;
&lt;li&gt;create “AI noise” instead of value&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This isn’t a model problem — it’s a decision gating problem.&lt;/p&gt;

&lt;h2&gt;Intent vs. instruction&lt;/h2&gt;

&lt;p&gt;Human intent is different from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;prompts&lt;/li&gt;
&lt;li&gt;rules&lt;/li&gt;
&lt;li&gt;feedback loops&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Intent answers whether something should happen at all, not how it should happen.&lt;/p&gt;

&lt;p&gt;In many systems, intent is implicit or assumed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;inferred from logs&lt;/li&gt;
&lt;li&gt;guessed from past behavior&lt;/li&gt;
&lt;li&gt;approximated via confidence scores&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But intent can also be treated as a first-class signal.&lt;/p&gt;

&lt;h2&gt;A simple idea: intent-aware gating&lt;/h2&gt;

&lt;p&gt;Instead of letting agents always act, we can introduce a lightweight gate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Human intent is captured or injected into the system&lt;/li&gt;
&lt;li&gt;Before acting, the agent checks for intent&lt;/li&gt;
&lt;li&gt;If intent exists → action proceeds&lt;/li&gt;
&lt;li&gt;If not → action is delayed, skipped, or downgraded&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This isn’t “human approval” or heavy human-in-the-loop workflows.&lt;br&gt;
It’s closer to a relevance check.&lt;/p&gt;
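
&lt;p&gt;A toy sketch of such a gate (the names and fields are illustrative assumptions, not a specific product API):&lt;/p&gt;

```python
# Toy sketch of an intent-aware gate. An agent consults explicit
# human intent signals before acting; without a matching signal the
# action is delayed or skipped instead of executed.

def gated_act(action, intent_signals, execute):
    """Execute `action` only when a matching human intent signal exists."""
    topic = action.get("topic")
    if topic is None:
        return {"status": "skipped", "reason": "no topic to match intent against"}
    if topic in intent_signals:
        return {"status": "executed", "result": execute(action)}
    if action.get("reversible"):
        # low-stakes, reversible action: defer rather than drop
        return {"status": "delayed", "reason": "no live intent; retry later"}
    return {"status": "skipped", "reason": "no human intent recorded"}
```

&lt;p&gt;Note the gate never asks a human anything at decision time; it only reads signals humans have already emitted, which is why it stays lighter than approval workflows.&lt;/p&gt;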

&lt;h2&gt;Where this helps&lt;/h2&gt;

&lt;p&gt;This pattern seems especially useful for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;agentic automation&lt;/li&gt;
&lt;li&gt;decision escalation systems&lt;/li&gt;
&lt;li&gt;notification-heavy workflows&lt;/li&gt;
&lt;li&gt;governance or compliance-sensitive actions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Anywhere an agent can technically act, but maybe shouldn’t unless humans actually care.&lt;/p&gt;

&lt;h2&gt;Open questions&lt;/h2&gt;

&lt;p&gt;I’m still exploring a lot here, and I’m curious how others think about this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How do you currently infer or validate human intent in your systems?&lt;/li&gt;
&lt;li&gt;Is intent something that should be explicit or inferred?&lt;/li&gt;
&lt;li&gt;Where does intent gating break down or become unnecessary?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’ve been experimenting with this idea as a small API to test the concept in practice, but the core question is architectural, not product-specific.&lt;/p&gt;

&lt;p&gt;If you’re building agentic systems or thinking about AI decision boundaries, I’d love to hear how you approach this problem.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>ux</category>
    </item>
  </channel>
</rss>
