AI agents are powerful, but they become truly effective when they’re broken down into smaller, specialized parts. This post introduces the idea of subagents, how they work, why they matter, and how treating them like tools can make your systems more modular, scalable, and easier to debug.
What are SubAgents?
- Subagents are smaller, specialized AI agents that handle specific tasks (e.g., search, coding, data analysis) within a larger agent system.
- The main agent delegates work to these subagents, letting each one focus on what it does best.
- Using subagents as tools means treating each subagent like a callable function with a defined input/output schema.
- They improve accuracy because each subagent focuses on a narrow responsibility instead of trying to do everything.
- They enable scalability, since you can add, remove, or upgrade individual subagents without changing the whole system.
- They also make debugging and monitoring easier by isolating where errors or failures happen.
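The "callable function with a defined input/output schema" idea can be sketched in plain Python. The `summarize_subagent` below is a hypothetical stand-in for a real LLM-backed agent, just to show the shape of the pattern:

```python
# Minimal sketch of the subagent-as-tool pattern.
# summarize_subagent is a hypothetical stand-in for an LLM-backed agent.

def summarize_subagent(text: str) -> str:
    """Specialized subagent: input is raw text, output is a short summary."""
    return text.split(".")[0] + "."  # pretend this is an LLM summary

def main_agent(task: str) -> str:
    """The main agent delegates to the subagent like a function call."""
    summary = summarize_subagent(task)  # defined input -> defined output
    return f"Report based on: {summary}"

print(main_agent("Startups need focus. Everything else is noise."))
```

The main agent never cares *how* the subagent produced its answer, only that the input/output contract holds — which is exactly what makes subagents swappable and testable in isolation.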
Differences: SubAgents vs. SubAgents-as-Tools
- SubAgents: often treated as independent entities that share a common workspace.
  - Communication: they talk to each other or to a manager agent via shared state. With LangGraph, we can define shared state between agents.
  - Autonomy: the manager says, "Research this startup," and the subagents decide when and how to report back. They may even initiate conversations with one another without the manager's direct intervention.
  - Flow: usually managed by a graph or state machine where the "turn-taking" is governed by logic.
- SubAgents-as-Tools: each subagent is wrapped inside a tool definition.
  - Communication: the main agent sees the subagent as a black box. It passes an input and waits for a specific output.
  - Autonomy: the subagent has zero autonomy. It only speaks when spoken to; it cannot volunteer information, it is explicitly called.
  - Flow: the main agent's execution pauses until the tool (subagent) returns its finding.
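The two styles can be contrasted in a toy sketch (hypothetical functions, no real LLMs — just the control flow):

```python
# Hypothetical sketch contrasting the two styles.

# 1) SubAgents sharing state: agents read and write a common dict,
#    and outside logic (a graph / state machine) decides whose turn it is.
shared_state = {"task": "Research this startup", "notes": []}

def researcher(state: dict) -> None:
    state["notes"].append("market looks big")  # volunteers info into shared state

def manager(state: dict) -> str:
    return "; ".join(state["notes"])

researcher(shared_state)              # turn-taking governed by external logic
report_a = manager(shared_state)

# 2) SubAgent-as-tool: the main agent calls a black box and blocks
#    until it returns a specific output.
def research_tool(query: str) -> str:
    return "market looks big"         # only speaks when spoken to

report_b = research_tool("Research this startup")  # execution pauses here

print(report_a)
print(report_b)
```

Both reach the same finding; the difference is who controls the flow — a shared-state graph in the first case, a plain synchronous call in the second.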
Today, in this post, we're exploring SubAgents-as-Tools. Whether you're exploring agent design or building your own system, this will give you a clear, practical starting point 😉
Table of Contents
- Dependencies & Configuration
- Main Agent, Make LLM with AWS Bedrock, Nova
- SubAgents
- SubAgents Own Tools
- Langfuse Handler
- All Code & Demo
- Conclusion
- References
Dependencies & Configuration
- Please install dependencies:

```sh
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
# deactivate
```
requirements.txt:

```
langchain>=1.0.0
langchain-aws>=1.2.0
langgraph>=1.0.0
python-dotenv>=1.0.0
boto3>=1.34.0
langfuse>=4.0.0
```
- Enable AWS Bedrock model access in your region (e.g. eu-central-1, us-east-1):
  AWS Bedrock > Bedrock Configuration > Model Access > AWS Nova-Pro or Claude 3.7 Sonnet.
  In this code, we'll use AWS Nova-Pro, because AWS serves it in several regions.
- After enabling model access, grant your IAM user permission to access AWS Bedrock services: AmazonBedrockFullAccess.
- 2 options to reach the AWS Bedrock model using your AWS account:
  - AWS config: run `aws configure` to create the `config` and `credentials` files.
  - Environment variables: add a `.env` file:

```
AWS_ACCESS_KEY_ID=PASTE_YOUR_ACCESS_KEY_ID_HERE
AWS_SECRET_ACCESS_KEY=PASTE_YOUR_SECRET_ACCESS_KEY_HERE
```
Main Agent, Make LLM with AWS Bedrock, Nova
LangChain agent with Bedrock:

```python
from langchain_aws import ChatBedrockConverse
from langchain.agents import create_agent

def make_llm():
    return ChatBedrockConverse(model="us.amazon.nova-pro-v1:0", temperature=0.3)

main_agent = create_agent(
    model=make_llm(),
    tools=[call_market_agent, call_financial_agent, call_risk_agent],
    system_prompt=(
        "You are a senior startup advisor. When asked to evaluate a startup idea, "
        "delegate to all three specialist subagents (market research, financial analysis, risk assessment), "
        "then synthesise their findings into a clear, structured investment brief with a final verdict."
    ),
)

def evaluate(idea: str) -> None:
    print(f"\n{'═'*65}")
    print(f"IDEA: {idea}")
    print(f"{'═'*65}")
    result = main_agent.invoke(
        {"messages": [{"role": "user", "content": f"Evaluate this startup idea: {idea}"}]},
        {"callbacks": [langfuse_handler]},  # if you don't want to use Langfuse, remove callbacks and handler
    )
    print(result["messages"][-1].content)
```
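The `invoke` result is a state dict whose `messages` list ends with the model's final answer. The extraction step can be illustrated with a mocked-up result (the `Msg` class and its content are hypothetical stand-ins for the real LangChain message objects):

```python
# Hypothetical mock of the result shape returned by main_agent.invoke():
# a dict with a "messages" list whose last entry carries the final answer.
class Msg:
    def __init__(self, content: str):
        self.content = content

result = {"messages": [Msg("Evaluate this startup idea: ..."),
                       Msg("Verdict: promising, proceed to MVP.")]}

print(result["messages"][-1].content)  # how the post extracts the final answer
```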
The main agent calls the subagents as tools:

```python
main_agent = create_agent(
    ...
    tools=[call_market_agent, call_financial_agent, call_risk_agent],
    ...
)
```
SubAgents
The subagents are defined below, together with the `call_*` wrapper functions that expose them as tools, so the main agent can call subagents-as-tools:

```python
# Subagents
market_subagent = create_agent(
    model=make_llm(),
    tools=[search_market_data, find_competitors],
    system_prompt="You are a market research specialist. Analyse market opportunity and competition. Be concise and data-driven.",
)

financial_subagent = create_agent(
    model=make_llm(),
    tools=[estimate_costs],
    system_prompt="You are a startup financial analyst. Estimate costs, revenue potential, and break-even timelines. Be specific with numbers.",
)

risk_subagent = create_agent(
    model=make_llm(),
    tools=[assess_risk],
    system_prompt="You are a startup risk advisor. Identify the top risks and suggest concrete mitigation strategies.",
)

# Subagents as tools for the main agent
@tool("market_research", description="Research market size, trends, and competitors for a startup idea.")
def call_market_agent(query: str) -> str:
    result = market_subagent.invoke({
        "messages": [{"role": "user", "content": query}]
    })
    return result["messages"][-1].content

@tool("financial_analysis", description="Estimate costs, revenue, and financial viability of a startup idea.")
def call_financial_agent(query: str) -> str:
    result = financial_subagent.invoke({
        "messages": [{"role": "user", "content": query}]
    })
    return result["messages"][-1].content

@tool("risk_assessment", description="Identify key risks and mitigation strategies for a startup idea.")
def call_risk_agent(query: str) -> str:
    result = risk_subagent.invoke({
        "messages": [{"role": "user", "content": query}]
    })
    return result["messages"][-1].content
```
SubAgents Own Tools
Each subagent calls its own tools to handle its tasks:

```python
# Subagent tools
@tool
def search_market_data(sector: str) -> str:
    """Search market size and growth data for a sector."""
    data = {
        "meal planning": "Global meal-kit market: $20B (2024), CAGR 13%. Health-conscious segment growing fastest.",
        "ai health": "AI in healthcare: $45B by 2026. Personalisation is top driver.",
        "food tech": "FoodTech VC funding: $8B in 2023. Subscription models dominate.",
    }
    return next((v for k, v in data.items() if k in sector.lower()), "No specific data found.")

@tool
def find_competitors(niche: str) -> str:
    """Find key competitors in a niche."""
    comps = {
        "meal planner": "Noom ($400M revenue), MyFitnessPal (150M users), Whisk (acquired by Samsung). Gap: no fully AI-personalised option.",
        "ai nutrition": "Nutrino, Suggestic — both B2B focused. Consumer gap exists.",
    }
    return next((v for k, v in comps.items() if k in niche.lower()), "Competitor data not found.")

@tool
def estimate_costs(product_type: str) -> str:
    """Estimate development and operational costs."""
    costs = {
        "app": "MVP: $80–150K. Monthly ops (infra + support): $15K. LLM API costs: $0.02/user/day.",
        "saas": "MVP: $100–200K. Monthly ops: $20K. Customer acquisition: $30–80 per user.",
        "subscription": "Churn benchmark: 5–8%/month. LTV target: >3x CAC.",
    }
    return next((v for k, v in costs.items() if k in product_type.lower()), "Cost estimate not available.")

@tool
def assess_risk(area: str) -> str:
    """Assess risks in a specific area."""
    risks = {
        "regulatory": "FDA oversight if medical claims made. GDPR for EU users. Avoid 'diagnosis' language.",
        "competition": "Big Tech (Google, Apple) could replicate. Moat needed: proprietary data or partnerships.",
        "retention": "Meal planning has high churn. Gamification and social features improve D30 retention by 40%.",
        "funding": "Seed rounds averaging $1.5M for consumer health apps. Strong traction needed before Series A.",
    }
    return next((v for k, v in risks.items() if k in area.lower()), "Risk data not found.")
```
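All four demo tools share one lookup pattern: scan a dict of canned answers for a key contained in the lower-cased input, falling back to a default string. Here is that pattern in isolation, with a trimmed-down data dict:

```python
# The next(...) substring-lookup pattern used by the demo tools, in isolation.
data = {
    "meal planning": "Global meal-kit market: $20B (2024), CAGR 13%.",
    "ai health": "AI in healthcare: $45B by 2026.",
}

def lookup(sector: str) -> str:
    # Return the first value whose key is a substring of the input,
    # else the default passed to next().
    return next((v for k, v in data.items() if k in sector.lower()),
                "No specific data found.")

print(lookup("AI Health apps"))   # substring match, case-insensitive
print(lookup("fintech"))          # no key matches, falls back to default
```

In a real system these canned dicts would be replaced by actual search APIs or databases; the substring matching is just a stand-in so the demo runs offline.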
Langfuse Handler
- Langfuse helps by providing detailed tracing, logging, and evaluation of LLM interactions, making it easier to debug and improve agent behavior.
- It also enables monitoring of performance, costs, and user flows, which is critical for optimizing and maintaining reliable AI applications in production.
```python
import os
from langfuse import Langfuse
from langfuse.langchain import CallbackHandler

langfuse = Langfuse(
    public_key=os.getenv("LANGFUSE_PUBLIC_KEY"),
    secret_key=os.getenv("LANGFUSE_SECRET_KEY"),
    host=os.getenv("LANGFUSE_BASE_URL"),
)
langfuse_handler = CallbackHandler()

result = main_agent.invoke(
    {"messages": [{"role": "user", "content": f"Evaluate this startup idea: {idea}"}]},
    {"callbacks": [langfuse_handler]},  # if you don't want to use Langfuse, remove callbacks and handler
)
```
All Code & Demo
GitHub Link: Project on GitHub
Agent App:
```python
import os
from langchain_aws import ChatBedrockConverse
from langchain_core.tools import tool
from langchain.agents import create_agent
from langfuse import Langfuse
from langfuse.langchain import CallbackHandler
from dotenv import load_dotenv

load_dotenv()

"""
A main orchestrator agent delegates to 3 specialist subagents:

main_agent
  - market_research_agent - analyses market size, trends, competitors
  - financial_agent       - estimates costs, revenue, break-even
  - risk_agent            - identifies risks and mitigation strategies

User asks: "Evaluate my startup idea: an AI-powered meal planner"
Main agent calls all 3 subagents, then synthesises a final report.
"""

langfuse = Langfuse(
    public_key=os.getenv("LANGFUSE_PUBLIC_KEY"),
    secret_key=os.getenv("LANGFUSE_SECRET_KEY"),
    host=os.getenv("LANGFUSE_BASE_URL"),
)
langfuse_handler = CallbackHandler()

def make_llm():
    return ChatBedrockConverse(model="us.amazon.nova-pro-v1:0", temperature=0.3)

# Subagent tools
@tool
def search_market_data(sector: str) -> str:
    """Search market size and growth data for a sector."""
    data = {
        "meal planning": "Global meal-kit market: $20B (2024), CAGR 13%. Health-conscious segment growing fastest.",
        "ai health": "AI in healthcare: $45B by 2026. Personalisation is top driver.",
        "food tech": "FoodTech VC funding: $8B in 2023. Subscription models dominate.",
    }
    return next((v for k, v in data.items() if k in sector.lower()), "No specific data found.")

@tool
def find_competitors(niche: str) -> str:
    """Find key competitors in a niche."""
    comps = {
        "meal planner": "Noom ($400M revenue), MyFitnessPal (150M users), Whisk (acquired by Samsung). Gap: no fully AI-personalised option.",
        "ai nutrition": "Nutrino, Suggestic — both B2B focused. Consumer gap exists.",
    }
    return next((v for k, v in comps.items() if k in niche.lower()), "Competitor data not found.")

@tool
def estimate_costs(product_type: str) -> str:
    """Estimate development and operational costs."""
    costs = {
        "app": "MVP: $80–150K. Monthly ops (infra + support): $15K. LLM API costs: $0.02/user/day.",
        "saas": "MVP: $100–200K. Monthly ops: $20K. Customer acquisition: $30–80 per user.",
        "subscription": "Churn benchmark: 5–8%/month. LTV target: >3x CAC.",
    }
    return next((v for k, v in costs.items() if k in product_type.lower()), "Cost estimate not available.")

@tool
def assess_risk(area: str) -> str:
    """Assess risks in a specific area."""
    risks = {
        "regulatory": "FDA oversight if medical claims made. GDPR for EU users. Avoid 'diagnosis' language.",
        "competition": "Big Tech (Google, Apple) could replicate. Moat needed: proprietary data or partnerships.",
        "retention": "Meal planning has high churn. Gamification and social features improve D30 retention by 40%.",
        "funding": "Seed rounds averaging $1.5M for consumer health apps. Strong traction needed before Series A.",
    }
    return next((v for k, v in risks.items() if k in area.lower()), "Risk data not found.")

# Subagents
market_subagent = create_agent(
    model=make_llm(),
    tools=[search_market_data, find_competitors],
    system_prompt="You are a market research specialist. Analyse market opportunity and competition. Be concise and data-driven.",
)

financial_subagent = create_agent(
    model=make_llm(),
    tools=[estimate_costs],
    system_prompt="You are a startup financial analyst. Estimate costs, revenue potential, and break-even timelines. Be specific with numbers.",
)

risk_subagent = create_agent(
    model=make_llm(),
    tools=[assess_risk],
    system_prompt="You are a startup risk advisor. Identify the top risks and suggest concrete mitigation strategies.",
)

# Subagents as tools for the main agent
@tool("market_research", description="Research market size, trends, and competitors for a startup idea.")
def call_market_agent(query: str) -> str:
    result = market_subagent.invoke({
        "messages": [{"role": "user", "content": query}]
    })
    return result["messages"][-1].content

@tool("financial_analysis", description="Estimate costs, revenue, and financial viability of a startup idea.")
def call_financial_agent(query: str) -> str:
    result = financial_subagent.invoke({
        "messages": [{"role": "user", "content": query}]
    })
    return result["messages"][-1].content

@tool("risk_assessment", description="Identify key risks and mitigation strategies for a startup idea.")
def call_risk_agent(query: str) -> str:
    result = risk_subagent.invoke({
        "messages": [{"role": "user", "content": query}]
    })
    return result["messages"][-1].content

main_agent = create_agent(
    model=make_llm(),
    tools=[call_market_agent, call_financial_agent, call_risk_agent],
    system_prompt=(
        "You are a senior startup advisor. When asked to evaluate a startup idea, "
        "delegate to all three specialist subagents (market research, financial analysis, risk assessment), "
        "then synthesise their findings into a clear, structured investment brief with a final verdict."
    ),
)

def evaluate(idea: str) -> None:
    print(f"\n{'═'*65}")
    print(f"IDEA: {idea}")
    print(f"{'═'*65}")
    result = main_agent.invoke(
        {"messages": [{"role": "user", "content": f"Evaluate this startup idea: {idea}"}]},
        {"callbacks": [langfuse_handler]},  # if you don't want to use Langfuse, remove callbacks and handler
    )
    print(result["messages"][-1].content)

if __name__ == "__main__":
    evaluate("An AI-powered personalised meal planner app with weekly grocery delivery integration.")
```
Run agent.py:

```sh
python agent.py
```
Demo
Demo Output on GitHub (better resolution)
Langfuse on GitHub (better resolution)
Market Research SubAgent SubTool Calls:

Risk Assessment SubAgent SubTool Calls (5 times):
Conclusion
In this post, we covered:
- the difference between subagents and subagents-as-tools,
- how to define the main agent, subagents-as-tools, and subagents' own tools,
- how to trace and debug with Langfuse.
If you found the tutorial interesting, I’d love to hear your thoughts in the blog post comments. Feel free to share your reactions or leave a comment. I truly value your input and engagement 😉
For other posts 👉 https://dev.to/omerberatsezer 🧐
References
- https://docs.langchain.com/oss/python/langchain/overview
- https://langfuse.com/
- https://aws.amazon.com/bedrock
- https://github.com/omerbsezer/Fast-LLM-Agent-MCP/
Your comments 🤔
- Which tools are you using to develop AI agents (e.g. AWS Strands, LangChain, etc.)? Please share your experience and interests in the comments.
- What do you think about SubAgents as Tools?