Patricia Buendia

Posted on • Originally published at fairlyz.lifetimeomics.com

Autonomous Agents Visiting Data

This is a submission for the Google AI Agents Writing Challenge: [Learning Reflections]


Nov 13, 2025 — by Patricia Buendia in AI, AI Governance, Data Security

The Google AI Agents Intensive Course (5DGAI) debuted in March 2025 and returned in November 2025, offering developers a front-row seat to the rapid evolution of agentic systems.

The First 5DGAI course focused on foundational skills: writing prompts, training agents, customizing them using Retrieval-Augmented Generation (RAG), and deploying them via MLOps. Participants learned to fine-tune models and integrate external knowledge sources to improve agent performance.
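To make the RAG idea concrete, here is a deliberately minimal sketch: relevance is scored by simple word overlap and the "LLM call" is omitted, where a real pipeline would use vector embeddings and a model API. All names here are illustrative, not from the course materials.

```python
# Toy RAG sketch: retrieve the most relevant snippet by word overlap,
# then prepend it to the prompt before it would be sent to a model.

def score(query: str, doc: str) -> int:
    """Count shared words between query and document (toy relevance score)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document with the highest overlap score."""
    return max(docs, key=lambda d: score(query, d))

def augment_prompt(query: str, docs: list[str]) -> str:
    """Build an augmented prompt: retrieved context + original question."""
    context = retrieve(query, docs)
    return f"Context: {context}\n\nQuestion: {query}"

docs = [
    "Agent Ops covers deploying agents with ADK, Vertex AI, and Kubernetes.",
    "RAG grounds model answers in external knowledge sources.",
]
prompt = augment_prompt("How does RAG ground answers?", docs)
print(prompt)
```

The key design point is that retrieval happens before generation, so the model's answer can be grounded in sources it was never trained on.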

By November, the second 5DGAI curriculum had advanced significantly: developers were trained to build autonomous agents capable of managing other agents and tools, and to deploy them via Agent Ops (ADK, Vertex AI, Kubernetes), a shift that reflects the growing complexity of real-world AI deployments.

The November 2025 Introduction to Agents White Paper introduced a more formalized framework for understanding these systems, especially in the section titled Taxonomy of Agentic Systems (pages 14–18).


Taxonomy of Agentic Systems

The taxonomy outlined in the November 2025 white paper breaks down agentic systems into five key levels:

  • Level 0: Core Reasoning System

    A standalone language model that relies solely on its pre-trained knowledge.

    (e.g., the original ChatGPT, built on GPT-3.5, in 2022–2023)

  • Level 1: Connected Problem-Solver

    Gains tool access to fetch real-time data and interact with external systems (e.g., APIs, RAG).

    (e.g., GPT-4-based ChatGPT in 2023, with plugins, browsing, and a code interpreter)

  • Level 2: Strategic Problem-Solver

    Introduces context engineering — multi-step planning, curated information, complex missions.

    (e.g., Gemini 1.5 Pro, GPT-4 Turbo in 2024 with memory and tool chaining)

  • Level 3: Collaborative Multi-Agent System

    Agents delegate to specialized sub-agents; scalable, parallel workflows.

    (e.g., Google DeepMind multi-agent demos, OpenAI Dev Day agent frameworks, late 2024–2025)

  • Level 4: Self-Evolving System

    Agents autonomously create new agents/tools to fill capability gaps.

    (No verified examples in 2025)
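The jump from Level 0 to Level 1 is the addition of a tool loop: the model can request a tool call, the runtime executes it, and the result is fed back before the final answer. This sketch is my own illustration, not from the white paper; the "model" is a stub and the JSON tool-request format is a hypothetical convention.

```python
# Illustrative Level 1 "connected problem-solver" loop with a stub model.

import json

def get_time_tool(city: str) -> str:
    """Toy external tool standing in for a real-time data API."""
    return f"12:00 in {city}"

TOOLS = {"get_time": get_time_tool}

def fake_model(prompt: str) -> str:
    """Stub LLM: requests a tool on the first turn, answers on the second."""
    if "TOOL_RESULT" not in prompt:
        return json.dumps({"tool": "get_time", "args": {"city": "Lima"}})
    return "Final answer based on " + prompt.split("TOOL_RESULT: ")[1]

def run_agent(user_query: str) -> str:
    reply = fake_model(user_query)
    request = json.loads(reply)                      # model asked for a tool
    result = TOOLS[request["tool"]](**request["args"])  # runtime executes it
    return fake_model(f"{user_query}\nTOOL_RESULT: {result}")

print(run_agent("What time is it in Lima?"))
```

Higher levels layer planning (Level 2) and delegation to sub-agents (Level 3) on top of this same basic loop.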


Agent use in data visitation and security concerns

As agents gain access to sensitive data and tools, security becomes central. During the Day 2 live-stream, Alex Wissner-Gross highlighted the risks and proposed a vision:

“I foresee an internet of agents, with one singleton agent per corporation who shares secrets with sub-agents but does not expose them to the outside.”

The white paper warns that tool access and autonomy introduce a delicate balance between utility and risk, especially when agents operate across organizational boundaries.

Recommended mitigation strategies

  • Role-based access control for agents
  • Audit trails for tool invocation and data access
  • Memory partitioning to prevent leakage across tasks
  • Prompt injection defenses via adversarial training and specialized security analyst agents
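As a rough sketch of the first two mitigations, role-based access control can gate which tools an agent may invoke, with an audit trail recording every attempt. The roles, tool names, and log format below are hypothetical, chosen only to illustrate the pattern.

```python
# Hypothetical RBAC gate plus audit trail for agent tool invocation.

from datetime import datetime, timezone

ROLE_PERMISSIONS = {
    "analyst": {"read_dataset"},
    "admin": {"read_dataset", "delete_dataset"},
}

audit_log: list[dict] = []  # every attempt is recorded, allowed or not

def invoke_tool(agent_id: str, role: str, tool: str) -> str:
    """Check the agent's role before running a tool; log every attempt."""
    allowed = tool in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "tool": tool,
        "allowed": allowed,
    })
    if not allowed:
        return f"DENIED: {role} may not call {tool}"
    return f"OK: {tool} executed for {agent_id}"

print(invoke_tool("agent-7", "analyst", "read_dataset"))
print(invoke_tool("agent-7", "analyst", "delete_dataset"))
```

Logging denied attempts as well as successful ones is deliberate: the audit trail is most valuable precisely when an agent tries something outside its role.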

Final thoughts

The Google AI Agents Intensive Course reflects technical progress and surfaces ethical and operational challenges in deploying autonomous systems. As we move toward an internet of agents, frameworks like the Taxonomy of Agentic Systems and security models proposed by experts will be critical.


Tags: Autonomous Agents, Internet-of-agents (IoA), Multi-Agent System, Secrets, Self-Evolving System
