Aryan Irani
Agentic Workflows inside Google Workspace: Build a Google Docs Agent with ADK

Welcome!

Google Workspace is where work happens. From drafting reports in Docs to crunching data in Sheets and collaborating in Gmail and Meet - it's the daily home for millions of teams. It's where ideas start, decisions are documented, and collaboration happens in real time.

Now imagine if your Docs, Sheets, and Gmail weren't just tools, but collaborators.

Thanks to Google's Agent Development Kit (ADK) and Vertex AI's Agent Engine, that's no longer just an idea. These frameworks let you build intelligent agents, deploy them at scale, and integrate them seamlessly into your Google Workspace tools - enabling a new era of agentic productivity. 

In this three-part tutorial series, we'll explore exactly how to do that. 
Series Roadmap: 
 - Part 1: Build Your First ADK Agent (You're here!) 
 - Part 2: Deploy an ADK Agent to Vertex AI Agent Engine
 - Part 3: Integrate into Google Docs 

Let's get started and bring intelligent, agentic workflows to the tools you already use every day.

This tutorial builds upon ideas introduced in Google's official Fact-Check sample for Apps Script. If you're looking for a quick overview of the official implementation, that's a great place to start. 

How It Works

Building the ADK-Agent

Let's create the ADK agent: an AI-based Auditor that fact-checks text, inspired by Google's Fact-Check custom function sample. Unlike the sample's single-step approach, our agent uses multi-step reasoning to extract claims, verify them with `google_search`, and output structured JSON.

Step 1: Set Up Your Project Environment

First, prepare your development environment to use the Google Agent Development Kit (ADK). Create a project, install dependencies, and authenticate with Google Cloud.

1. Create a folder and navigate to it:

mkdir workspace-agent
cd workspace-agent

2. Create and activate a virtual environment:

python -m venv .venv
source .venv/bin/activate

3. Install the Google ADK:

pip install google-adk

4. Authenticate with Google Cloud for local testing:

gcloud auth application-default login

Check out the link given below for information on how to get started with the Google Cloud SDK.
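One more piece of setup worth knowing: the `adk` CLI discovers agents by package, looking for a folder whose `__init__.py` imports the agent module. A minimal sketch of that layout, assuming we name the package `ai_auditor` to match the agent we define in the next step:

```shell
# Inside workspace-agent/: each agent lives in its own package folder.
mkdir -p ai_auditor

# __init__.py must import the agent module so ADK can find root_agent.
cat > ai_auditor/__init__.py <<'EOF'
from . import agent
EOF

# agent.py will hold the Agent definition from Step 2.
touch ai_auditor/agent.py
```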

Step 2: Design the Agent's Logic

Now that our environment is set up, we'll write the agent's code in `agent.py`. An ADK agent has three main components: a model, tools, and an instruction set that defines its behaviour. Our AI Auditor agent extracts factual claims from text, verifies them with Google Search, and returns a JSON report.

You can check out the full agent code at the link given below.




from google.adk.agents import Agent
from google.adk.tools import google_search

We start by setting up our initial Python script with the necessary imports.

root_agent = Agent(
    name='ai_auditor',
    model='gemini-2.0-flash-001',
    description="Fact-checks statements from a document and provides citations.",
    instruction="""
You are an AI Auditor specialized in factual verification and evidence-based reasoning.
Your goal is to analyze text from a Google Doc, identify verifiable factual claims, and produce a concise, source-backed audit report.

### 🔍 TASK FLOW

1. **Extract Claims**
   - Analyze the input text and identify factual claims that can be objectively verified.
   - A factual claim is any statement that can be proven true or false with external evidence.
   - Skip opinions, vague generalizations, or speculative language.
   - List each claim as a string in a JSON array.

2. **Verify Claims**
   - For each extracted claim:
     - Use the `google_search` tool to find relevant, credible results.
     - Evaluate at least the top 3 relevant URLs to determine the claim's accuracy.
     - Cross-check multiple sources when possible to ensure confidence.

3. **Classify Findings**
   - For each claim, determine one of the following verdicts:
     - ✅ **True:** Supported by multiple reputable sources.
     - ⚠️ **Misleading / Partially True:** Contains partially correct or context-dependent information.
     - ❌ **False:** Contradicted by credible evidence.
     - ❓ **Unverifiable:** Insufficient information to confirm or deny.
   - Provide a **confidence score (0–100)** reflecting the strength of evidence.

4. **Record Evidence**
   - For each claim, include:
     - The **verdict**
     - **Reasoning summary** (1–2 sentences)
     - **List of citation URLs** used for verification

5. **Summarize Results**
   - Compile a final report including:
     - Total number of claims analyzed
     - Distribution of verdicts (True / False / Misleading / Unverifiable)
     - Brief overall conclusion (e.g., "Most claims are accurate but some lack supporting evidence.")

### 🧾 OUTPUT FORMAT

Return your final response in structured JSON format as follows:

{
  "claims": [
    {
      "claim": "...",
      "verdict": "True | False | Misleading | Unverifiable",
      "confidence": 0-100,
      "reasoning": "...",
      "sources": ["https://...", "https://..."]
    }
  ],
  "summary": {
    "total_claims": X,
    "verdict_breakdown": {
      "True": X,
      "False": X,
      "Misleading": X,
      "Unverifiable": X
    },
    "overall_summary": "..."
  }
}

### 🧠 ADDITIONAL INSTRUCTIONS
- Always prefer authoritative domains (.gov, .edu, .org, or major media).
- Avoid low-quality or user-generated content as primary sources.
- Be concise, accurate, and transparent about uncertainty.
    """,
    tools=[google_search],  # Only use the search tool
)

We then define the AI agent by giving it a name, followed by the model, a description, and a very detailed instruction set. The instruction set is the most important part: it defines exactly how the agent should think and operate. This structure mirrors how professional fact-checkers work - turning the AI into an autonomous auditing pipeline.

Once done with this agent declaration, we move to giving the agent access to real-world information via the Google Search tool. Instead of relying on pre-trained data, the agent can perform live searches, evaluate results, and provide up-to-date citations.

That's what makes this system agentic - the model doesn't just generate answers, it takes action (using tools) to verify information. 

Step 3: Test the Agent Locally

Before deploying the agent, we must test it locally to ensure it works. You can either write sample scripts that run the agent on test input and verify the output, or test it interactively using the `adk web` command.
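For scripted testing, ADK also exposes a Runner API. The sketch below follows the pattern from the ADK quickstart; the package name `ai_auditor`, the session identifiers, and the sample prompt are assumptions for illustration, and it requires your Google Cloud credentials from Step 1 to actually call the model:

```python
import asyncio

from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.genai import types

from ai_auditor.agent import root_agent  # the Agent defined in Step 2

APP, USER, SESSION = "ai_auditor", "local-user", "local-session"

async def main() -> None:
    # In-memory sessions are enough for local testing.
    session_service = InMemorySessionService()
    await session_service.create_session(
        app_name=APP, user_id=USER, session_id=SESSION
    )

    runner = Runner(agent=root_agent, app_name=APP,
                    session_service=session_service)
    message = types.Content(
        role="user",
        parts=[types.Part(text="Audit this: The Eiffel Tower is in Berlin.")],
    )

    # run_async yields events; the final response carries the JSON report.
    async for event in runner.run_async(
        user_id=USER, session_id=SESSION, new_message=message
    ):
        if event.is_final_response() and event.content:
            print(event.content.parts[0].text)

asyncio.run(main())
```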

Our first interaction is fundamental: understanding what the agent can actually do. This tests the agent's ability to summarize its own description and instructions.

The agent provides a concise summary of its abilities, drawing from its description and instruction set.

We then provide it with two statements, and it successfully audits the contents and gives back a well-structured JSON response.
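Because the agent returns plain JSON matching the schema in its instruction set, downstream code can consume the report directly. Below is a hypothetical example response (the verdict, confidence, and sources are invented for illustration) and how a caller might parse it:

```python
import json

# A hypothetical agent response matching the instructed output schema.
raw = '''{
  "claims": [
    {"claim": "The moon is made of cheese",
     "verdict": "False",
     "confidence": 98,
     "reasoning": "Lunar samples show the Moon is composed of rock.",
     "sources": ["https://example.com/moon-composition"]}
  ],
  "summary": {
    "total_claims": 1,
    "verdict_breakdown": {"True": 0, "False": 1,
                          "Misleading": 0, "Unverifiable": 0},
    "overall_summary": "The single claim is false."
  }
}'''

# Parse the report and print one line per audited claim.
report = json.loads(raw)
for c in report["claims"]:
    print(f'{c["verdict"]} ({c["confidence"]}%): {c["claim"]}')
```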

Under the hood:

  • The google_search tool fetches relevant pages from the web.
  • The Gemini 2.0 Flash model parses those snippets and classifies truthfulness.
  • The ADK handles reasoning orchestration and ensures step-by-step logic is followed.

This local test phase helps validate your agent's workflow before you deploy it on Vertex AI Agent Engine, where it can be connected to Google Workspace tools like Docs or Sheets.

Conclusion and Teaser

Congratulations! You've built an AI Auditor agent that fact-checks text with `google_search`, producing structured JSON responses. This pattern - defining a model, tools, and instructions - can power any task, from verifying claims in a Doc to analysing data in a Sheet. By automating manual tasks and leveraging credible sources, you have unlocked smarter productivity.

Try it now: run your agent with a fun input like "The moon is made of cheese" and check out the JSON response returned by the agent.

In Part 2, we'll deploy the agent to Vertex AI Agent Engine, making it cloud-powered and scalable for production use. Stay tuned, and check out the GitHub repo for all the code.

https://github.com/aryanirani123/Agent-Development-Kit

Feel free to reach out if you have any issues/feedback at aryanirani123@gmail.com.
