Darren "Dazbo" Lester for Google Developer Experts

Originally published at Medium.

Using the Loop Pattern to Make My Multi-Agent Solution More Robust (with Google ADK)

In a previous series of articles, I introduced LLMs-Generator: a multi-agent solution built with the Google Agent Development Kit (ADK) to create llms.txt files.

As a recap:

  • I first covered the overall goals, in terms of what llms.txt is, how it can help us, and how I want to provide my agent with the power to use it.
  • In the second part I covered the multi-agent solution design.
  • And in the third part, I provided a detailed walkthrough of the solution implementation.

Alas, after putting the solution to the test with fairly large documentation repositories like ADK Docs, I encountered a reliability problem. Many of the generated file summaries came back with "No summary available."

But why? What can I do to fix it?

This article explores the challenges I faced, the solution I found in agentic design patterns, and the implementation of a more robust, scalable, and reliable multi-agent system using a Loop agent.

The Challenge: Hitting the Context Limit

The initial design was simple: a sequential agent that would:

  1. Read all the files in a repository.
  2. Pass the entire content of all the files to a summariser agent in one hit.

The design looked like this:

LLMS-Generator Multi-Agent Application — First Phase

The Document Summariser Agent is itself a SequentialAgent. First it reads the content of ALL the files into the context, and then it runs the content_summariser_agent to actually generate all the summaries in one hit.

The content_summariser_agent itself had a complex and lengthy prompt, asking it to do two main things:

  1. Summarise all the files that have been read.
  2. Then generate a summary of the entire project itself.

So what’s the problem?

Well, the combined content of all files in a large repository can easily exceed the context window of even the most powerful LLMs like Gemini 2.5 Pro. When the input is too large, the model simply cannot “see” all the data, leading to incomplete or failed summaries. This was the root cause of the "No summary available" messages.
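To make the failure mode concrete, here is a rough pre-flight check (hypothetical; not part of the original solution) that estimates whether combined file content will fit in a model's context window, using the common heuristic of roughly four characters per token. The limit value is an illustrative assumption, not a documented Gemini figure.

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token (heuristic)."""
    return len(text) // 4

def fits_in_context(file_contents: list[str], context_limit: int) -> bool:
    """Check whether the combined content plausibly fits the context window."""
    total = sum(estimate_tokens(content) for content in file_contents)
    return total <= context_limit
```

With hundreds of documentation files, a check like this fails quickly, which is exactly why the content has to be split up.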

My Solution: The Loop Agent Design Pattern

I was reading the Google document Choose a design pattern for your agentic AI system which outlines common patterns for building complex agentic systems. The Loop pattern stood out as the perfect solution for my problem.

As Google describes it:

The multi-agent loop agent pattern repeatedly executes a sequence of specialized subagents until a specific termination condition is met. This pattern uses a loop workflow agent that, like other workflow agents, operates on predefined logic without consulting an AI model for orchestration. After all of the subagents complete their tasks, the loop agent evaluates whether an exit condition is met.
Use the loop pattern for tasks that require iterative refinement or self-correction, such as generating content and having a critic agent review it until it meets a quality standard.

The pattern looks like this:

Loop Pattern for Multi-Agent Applications

Instead of processing all the files in one big hit, I can split the task into batches and process the batches iteratively, until all the summaries have been generated. After we have all the summaries, we can simply aggregate them.

The New Design

There are more agents and more tools! But the agents themselves are simpler and the division of labour is much more sensible.

The New Design Diagram

1. The Coordinator Agent

The generate_llms_coordinator (in the top level agent.py) is largely unchanged. It uses the same discover_files to find all the files in the repo, and the generate_llms_txt tool to generate the final llms.txt file. And it leverages the Agent-as-a-Tool pattern to wrap the document_summariser_agent and use it as a tool, i.e.

tools=[
    discover_files, # automatically wrapped as FunctionTool
    generate_llms_txt, # automatically wrapped as FunctionTool
    AgentTool(agent=document_summariser_agent)
],

The remaining agents sit under src/llms_gen_agent/sub_agents/doc_summariser/agent.py.

2. The Document Summariser Agent

As before, this is a SequentialAgent. But rather than just reading all the files in one hit and then summarising them, it has four agents that must run sequentially:

# This is the main document summarizer agent, orchestrating the entire process.
document_summariser_agent = SequentialAgent(
    name="document_summariser_agent",
    description="Orchestrates the entire file summarization process including batching and looping.",
    sub_agents=[
        batch_creation_agent,  # Step 1: Create batches of files
        batch_processing_loop, # Step 2: Process each batch in a loop
        project_summariser_agent, # Step 3: Generate overall project summary
        final_summary_agent    # Step 4: Finalize and combine all summaries
    ]
)

3. The Batch Creation Agent

Here we’re just using an agent to wrap a tool. Our agent takes the list of all the file paths and uses a new create_file_batches tool to split them into small batches. E.g. 10 files per batch.

# This agent is responsible for initially splitting all discovered files into batches.
batch_creation_agent = Agent(
    name="batch_creation_agent",
    description="Creates batches of files.",
    model=Gemini(
        model=config.model,
        retry_options=retry_options
    ),
    instruction=f"""You MUST call the `create_file_batches` tool with a `batch_size` of {config.batch_size}. This is your ONLY task. The `create_file_batches` tool will read the 'files' from the session state, create batches, and store them in the 'batches' session state key. Do NOT respond with anything else. Just call the tool.""",
    tools=[create_file_batches]
)

But why are we using an agent to wrap a tool? Can’t we just supply the tool directly to the document_summariser_agent? The answer is: no. Workflow agents (like SequentialAgent) are designed to work with multiple agents, but they do not directly manage or orchestrate tools.

The tool itself is just a simple Python function that retrieves all the file paths from the session state, and then splits the files into a number of batches, depending on a configurable batch size. These batches are stored in the agent’s session state, i.e. in tool_context.state["batches"].

4. The Batch Processing Loop

This LoopAgent is the heart of the new design. It iterates through the batches until they are all processed.

# This LoopAgent iteratively processes each batch of files until all are summarized.
batch_processing_loop = LoopAgent(
    name="batch_processing_loop",
    description="Processes all file batches in a loop.",
    sub_agents=[
        batch_selector_agent, # Gets next batch or exits
        single_batch_processor
    ],
    max_iterations=200 # A safeguard against infinite loops
)

With each iteration it calls two sub-agents:

  • The batch_selector_agent runs at the start of each iteration and checks for remaining batches. It does this by running a single tool: process_batch_selection. This tool pops the next batch off the batches list stored in session state and assigns it to a state variable called current_batch. But if there are no more batches to process (because batches is empty), the tool sets tool_context.actions.escalate = True. This is the specific mechanism that tells the LoopAgent to terminate, i.e. it’s our exit condition for the loop.
  • The single_batch_processor agent is also a SequentialAgent. Let’s look at it in more detail.

5. The Single Batch Processor Agent

The single_batch_processor runs three sub-agents in sequence. It looks like this:

# This agent will process one batch sequentially
single_batch_processor = SequentialAgent(
    name="single_batch_processor",
    description="Reads and summarizes one batch of files.",
    sub_agents=[
        file_reader_agent,      # Reads files from 'current_batch'
        content_summariser_agent, # Summarises files from 'current_batch'
        update_summaries_agent  # Appends batch summaries to 'all_summaries'
    ]
)
  • The File Reader Agent is largely unchanged from before. It reads the contents of all the supplied files. But this time the implementation looks at the current_batch in the session state, rather than all the files.
  • The Content Summariser Agent works much like it did in the previous implementation. It creates summaries for all the files in the batch. The prompt for the content_summariser_agent is now much simpler and more focused. Instead of asking for everything at once, it’s only asked to summarise the files in the current batch:
You are an expert summariser. Your task is to summarise EACH individual file's content in no more than four sentences.
The summary should reference any key concepts, classes, best practices, etc.

- Do NOT start summaries with text like "This document is about..." or "This page introduces..." Just immediately describe the content.
E.g.
- Rather than this: "This document explains how to configure streaming behavior..." Say this: "Explains how to configure streaming behavior..."
- Rather than this: "This page introduces an agentic framework for..." Say this: "Introduces an agentic framework for..."
- If you cannot generate a meaningful summary, use 'No meaningful summary available' as its summary.

The final output MUST be a JSON object with a single top-level key called 'batch_summaries', which contains a dictionary of file paths to summaries.
Example:
{"batch_summaries": {"/path/to/file1.md":"Summary of file 1.", "/path/to/file2.md":"Summary of file 2."}}

IMPORTANT: Your final response MUST contain ONLY this JSON object. DO NOT include any other text, explanations, or markdown code block delimiters.

FILE CONTENTS START:
{files_content}
---
FILE CONTENTS END:
Now return the JSON object.
  • The Update Summaries Agent takes the summaries from the processed batch and appends them to a master list of summaries in the session state.

6. The Project Summariser Agent

Once the LoopAgent has finished iterating, we move on to the project_summariser_agent. Its job is to review the complete list of individual file summaries and the project’s README.md (if it exists) to generate a final, high-level project summary.

# Agent to create the final project summary after the loop
project_summariser_agent = Agent(
    name="project_summariser_agent",
    description="Creates the final project summary from all file summaries.",
    model=Gemini(
        model=config.model,
        retry_options=retry_options
    ),
    instruction="""Read the content of the project's README.md file (if available in session state as 'readme_content'). Then, review the 'all_summaries' from the session state. Generate a two-paragraph summary of the entire project based on these inputs. The output should be a JSON object with a single key 'project_summary' containing the generated summary.""",
    tools=[read_files], # To read the README
    output_schema=ProjectSummaryOutput,
    output_key="project_summary_raw",
    after_model_callback=clean_json_callback # Apply callback here
)

We once again use the clean_json_callback to ensure the resulting output doesn’t have any additional preamble or markup.

7. The Final Summary Agent

This final agent takes all the individual summaries and the project summary and formats them into the required doc_summaries format. It attaches these summaries to the session state.

# This agent combines all collected summaries and the project summary into the final output.
final_summary_agent = Agent(
    name="final_summary_agent",
    description="Finalizes the document summaries by combining all individual and project summaries.",
    model=Gemini(
        model=config.model,
        retry_options=retry_options
    ),
    instruction="""Call the `finalize_summaries` tool to combine all collected summaries and the project summary into the final output format.""",
    tools=[finalize_summaries]
)

Finally…

Control returns to our orchestrator, which uses a tool to create the llms.txt from the aggregated summaries.

But Why Use Batches At All?

Do we even need batches? We could instead use the loop pattern, but apply it document-by-document.

But there are cons to this approach:

  1. It is not very efficient. Every call to the Gemini model has overhead and latency.
  2. Making calls per file rather than per batch significantly increases the number of API calls, which can cause us to hit API rate limits.

So aggregating the files into batches allows us to reduce the number of API calls, avoid rate limiting, reduce overall cost, and improve the overall performance of the application.
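As a back-of-the-envelope illustration (the repository size here is a made-up example, not a measurement from the project):

```python
import math

num_files = 250   # hypothetical repository size
batch_size = 10   # files per batch, as configured

calls_per_file_approach = num_files                           # one call per file
calls_per_batch_approach = math.ceil(num_files / batch_size)  # one call per batch

print(calls_per_file_approach, calls_per_batch_approach)  # 250 vs 25
```

A 10x reduction in summarisation calls, with a correspondingly lower chance of tripping rate limits.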

Does It Work?

Of course it does! The log output (at DEBUG level) looks like this:

Creating batches

And as we progress towards the end:

Progressing

After all the batches have completed, the application moves on to summarising the repo itself:

Project summary generation

And finally:

LLMS.txt generated

Hurrah!

Conclusions

The initial version of LLMS-Generator was an okay start, but it wasn’t robust enough for real-world use with large documentation repos. By identifying the root cause of the problem — context window limits — and applying the appropriate agentic design pattern, I was able to create a much more reliable and scalable solution.

The Loop pattern — using the LoopAgent from the ADK — is a good fit for this use case. This experience was a great reminder that when building complex systems — AI or otherwise — choosing the right architecture is fundamental!

Additional Thoughts

Adding the loop pattern was a solid move to make this solution more dependable. But it's probably not the best pattern. A better approach would probably be some sort of "generate-evaluate" loop, only exiting the loop when our final llms.txt meets our quality requirements.

Also, this solution is not particularly efficient for context management. In a future blog, I'll show you how we can make use of ADK artifacts to significantly reduce the amount of context window we're consuming, and to more reliably pass data between our agents.

You Know What To Do!

  • Please share this with anyone that you think will be interested. It might help them, and it really helps me!
  • Please give me claps! (Just hold down the clap button.)
  • Feel free to leave a comment 💬.
  • Follow and subscribe, so you don’t miss my content.

Useful Links and References

  • Please Add a Star to My Repo
  • Google Cloud ADK
  • Multi-Agent Design
  • Llms.Txt
