DEV Community

Ashutosh Rana

Google ADK Has a Compliance Gap — Here's How to Close It

Google's Agent Development Kit (ADK) makes it remarkably easy to build multi-agent AI systems. You can wire up an orchestrator agent, connect it to specialized sub-agents, and have a working pipeline in under 100 lines of Python.

What it does not give you — at least not yet — is a compliance layer.

In regulated industries, that gap is the difference between a production deployment and a liability.

What ADK Gives You

ADK provides a clean callback architecture:

  • before_model_callback — intercept before the LLM sees the prompt
  • before_agent_callback — intercept at agent invocation
  • before_tool_callback — intercept before any tool executes
  • after_model_callback — intercept after the LLM responds

These hooks exist precisely for this kind of instrumentation. The framework is well-designed. The gap is not architectural — it is that there is no reference implementation for compliance enforcement using these hooks.

Why This Matters for Regulated Deployments

Consider three real scenarios:

Higher Education (FERPA)
An admissions agent handles student data. FERPA requires that every disclosure of student education records be logged (34 CFR § 99.32) and that access be limited to legitimate educational interest (34 CFR § 99.31). Without a compliance layer, an ADK agent has no mechanism to enforce or record either requirement.

Healthcare (HIPAA)
An intake triage agent processes patient queries. HIPAA requires that PHI (Protected Health Information) only be accessed by authorized workforce members under a BAA (Business Associate Agreement). An ADK agent without a compliance hook cannot verify BAA status or create the audit trail required by 45 CFR § 164.312.

Enterprise AI (OWASP Agentic AI Top 10 2026)
OWASP's 2026 Agentic AI Top 10 identifies privilege escalation (ASI02), insufficient audit logging (ASI06), and uncontrolled resource consumption (ASI08) as the top risks in multi-agent systems. An ADK orchestrator that spawns sub-agents without privilege boundaries is exposed to all three.
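The resource-consumption risk (ASI08) is the most mechanical of the three to mitigate. A token-bucket limiter is one standard approach; this is a self-contained sketch, not the package's actual implementation:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter keyed to requests per minute.

    Hypothetical sketch for illustrating ASI08 mitigation; a production
    limiter would also need per-identity buckets and persistence.
    """

    def __init__(self, rpm: int):
        self.capacity = rpm
        self.tokens = float(rpm)
        self.refill_per_sec = rpm / 60.0
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rpm=60)
print(all(bucket.allow() for _ in range(60)))  # True: within the minute's budget
print(bucket.allow())                          # False: the 61st immediate call is rejected
```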

The ADKPolicyGuard Pattern

I built ADKPolicyGuard in the regulated-ai-governance package to provide a drop-in compliance layer for ADK agents.

from regulated_ai_governance.adapters.google_adk_adapter import (
    ADKPolicyGuard,
    BigQueryAuditSink,
    Regulation,
)
from google.adk.agents import LlmAgent

# Define policy — FERPA + OWASP Agentic Top 10
guard = ADKPolicyGuard(
    regulations=[Regulation.FERPA, Regulation.OWASP_AGENTIC_TOP10],
    audit_sink=BigQueryAuditSink(
        project_id="your-gcp-project",
        dataset_id="compliance_audit",
        table_id="adk_disclosures",
    ),
    rate_limit_rpm=60,
)

# Wire into your ADK agent via callbacks
agent = LlmAgent(
    name="student_advisor",
    model="gemini-2.0-flash",
    before_agent_callback=guard.before_agent_callback,
    before_model_callback=guard.before_model_callback,
    before_tool_callback=guard.before_tool_callback,
)

Every agent invocation is now covered:

  • Before agent starts: identity scope is validated, rate limit is checked
  • Before model call: prompt is screened for policy violations (OWASP LLM01 — prompt injection)
  • Before tool executes: tool permissions are validated against the authorized role
  • All events: written to BigQuery as structured audit records

Multi-Agent Orchestration

The real value shows up in multi-agent systems. In an Orchestrator → LeadAgent → ApplicantAgent architecture, each agent hand-off is a potential privilege escalation point. ADKPolicyGuard enforces that sub-agents cannot exceed the privilege scope of the orchestrator:

from google.adk.agents import SequentialAgent

orchestrator = SequentialAgent(
    name="admissions_orchestrator",
    sub_agents=[lead_agent, applicant_agent],
    before_agent_callback=guard.before_agent_callback,
)

The guard's before_agent_callback validates each sub-agent invocation against the original identity scope. A sub-agent cannot access data the orchestrator was not authorized to access — privilege escalation is structurally prevented.
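The structural check described above reduces to a subset test: a sub-agent may request no privilege the orchestrator does not already hold. A minimal sketch (scope strings and function name are illustrative):

```python
# Hypothetical sketch of the hand-off check: the sub-agent's requested
# scope must be a subset of the scope granted to the orchestrator.

def validate_subagent_scope(orchestrator_scope: set[str],
                            requested: set[str]) -> bool:
    """Allow the hand-off only if it adds no new privileges."""
    return requested <= orchestrator_scope

orchestrator_scope = {"read:applications", "read:transcripts"}
print(validate_subagent_scope(orchestrator_scope, {"read:transcripts"}))     # True: allowed
print(validate_subagent_scope(orchestrator_scope, {"write:financial_aid"}))  # False: escalation blocked
```

Because the check runs in the callback on every hand-off, escalation fails at invocation time rather than being discovered in an audit after the fact.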

The Audit Record

Every agent interaction produces a structured compliance record:

{
  "event_id": "adk-20260418-001",
  "agent_name": "student_advisor",
  "regulation": "FERPA",
  "decision": "ALLOWED",
  "identity": {"user_id": "stu_001", "role": "student"},
  "tool_calls": ["get_transcript", "check_financial_aid"],
  "timestamp": "2026-04-18T10:30:00Z",
  "rate_limit_remaining": 58,
  "owasp_checks": {
    "ASI01_prompt_injection": "PASS",
    "ASI02_privilege_escalation": "PASS",
    "ASI06_audit_logging": "PASS"
  }
}

This record goes directly to BigQuery for compliance reporting, incident investigation, and regulatory audit response.
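As a rough sketch of the serialization step, BigQuery batch loads commonly take newline-delimited JSON, one compact object per line. The helper below is illustrative; the actual sink in the package is not shown here:

```python
import json

# Hypothetical sketch: flatten an audit event into the one-object-per-line
# JSON shape used for BigQuery NDJSON loads. Field names follow the audit
# record shown above.

def to_ndjson_row(event: dict) -> str:
    # Compact separators and sorted keys keep rows stable and diff-friendly;
    # no embedded newlines, so each row occupies exactly one line.
    return json.dumps(event, separators=(",", ":"), sort_keys=True)

row = to_ndjson_row({
    "event_id": "adk-20260418-001",
    "regulation": "FERPA",
    "decision": "ALLOWED",
})
print(row)
```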

What This Does Not Replace

ADKPolicyGuard is a compliance enforcement layer, not an authentication system. Your application must establish the authenticated identity context before the agent runs. The guard enforces the scope; your auth layer establishes it.

It also does not replace your legal counsel's review of how your specific deployment maps to applicable regulations.

Getting Started

pip install regulated-ai-governance

from regulated_ai_governance.adapters.google_adk_adapter import ADKPolicyGuard, Regulation

If you are building ADK agents for healthcare, education, or any regulated environment and want to discuss the compliance architecture, open an issue or connect with me directly.

