Three days. That's what it took to build a sophisticated financial intelligence demo orchestrating three specialized MCP servers with AWS Strands and SAP Generative AI Hub. The result? A complete SAP TechEd demo showcasing a potential 30% reduction in financial analysis time.
Not because building agentic systems is trivial, but because integrating AWS and SAP's generative AI stacks with the right architectural decisions makes complex demo scenarios tractable.
## The Challenge: Demonstrating Enterprise AI Integration
Most AI agent tutorials showcase simple, single-tool agents. But demonstrating enterprise-grade AWS and SAP integration requires more:
- Multiple data sources requiring specialized processing
- Cross-system coordination without hardcoded workflows
- Production-grade patterns and governance
- Observable, maintainable architectures
When creating our Devtoberfest session on building multi-tool research agents, we wanted to demonstrate real enterprise integration patterns—showcasing how SAP's Generative AI Hub connects with AWS Bedrock through the AWS Strands SDK.
## The Foundation: Research Agent with AWS Strands
We started with a deep research agent demo using the AWS Strands Agents SDK and Tavily API for web intelligence:
```python
import os

from strands import Agent, tool
from tavily import TavilyClient

# Assumes a Tavily API key is available in the environment
tavily_client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])

@tool
def web_search(query: str, time_range: str = None) -> str:
    """Search the web and return ranked results."""
    results = tavily_client.search(
        query=query,
        max_results=10,
        time_range=time_range,
    )
    return format_search_results(results)

@tool
def web_extract(urls: list[str]) -> str:
    """Extract full page content from URLs."""
    return tavily_client.extract(urls=urls)

@tool
def web_crawl(url: str, instructions: str = None) -> str:
    """Crawl a website and discover nested links."""
    return tavily_client.crawl(
        url=url,
        max_depth=2,
        instructions=instructions,
    )

# Create the agent (bedrock_model, RESEARCH_SYSTEM_PROMPT, and the
# formatting helpers are defined elsewhere in the notebook)
deep_researcher_agent = Agent(
    model=bedrock_model,
    system_prompt=RESEARCH_SYSTEM_PROMPT,
    tools=[web_search, web_extract, web_crawl, format_research_response],
)
```
What makes AWS Strands different? It's model-driven, not workflow-driven. You provide tools and a system prompt—the LLM handles planning, reasoning, and orchestration. This shifts complexity from code into the model's weights.
### Built-in Production Observability
AWS Strands automatically tracks critical metrics using OpenTelemetry:
| Metric Category | What It Tracks | Demo Value |
|---|---|---|
| Token Usage | Input/output/total tokens | Cost estimation |
| Performance | Latency and execution times | Benchmark tracking |
| Tool Usage | Call counts and success rates | Reliability assessment |
| Event Loops | Reasoning cycles | Efficiency analysis |
This integrates seamlessly with AWS X-Ray and CloudWatch for enterprise observability patterns.
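As one concrete use of these metrics, accumulated token counts can feed a rough cost estimate. The function below is a sketch; the per-1K-token rates are placeholders, not actual Bedrock pricing:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  in_rate_per_1k: float, out_rate_per_1k: float) -> float:
    """Rough cost estimate from accumulated token metrics (rates are placeholders)."""
    return input_tokens / 1000 * in_rate_per_1k + output_tokens / 1000 * out_rate_per_1k

# e.g. 12,000 input + 3,000 output tokens at assumed rates:
# 12 * 0.003 + 3 * 0.015 = 0.036 + 0.045 = 0.081
cost = estimate_cost(12_000, 3_000, in_rate_per_1k=0.003, out_rate_per_1k=0.015)
```

Plugging the tracked token usage into a helper like this gives a per-run cost figure you can chart alongside latency in CloudWatch.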
## The Innovation: Multi-Server Financial Intelligence Demo
Our demo showcases financial analysis requiring coordination of multiple specialized systems. That's where Model Context Protocol (MCP) becomes critical.
### Understanding MCP: The USB-C for AI
Anthropic open-sourced MCP in November 2024 to solve the "N×M problem"—every model needing connectors to every data source.
MCP provides a universal standard: One protocol, any model, any data source. Major providers including OpenAI and Google DeepMind adopted it within months.
The protocol uses JSON-RPC 2.0 with three primitives:
- Tools: Executable functions
- Resources: Structured data
- Prompts: Instruction templates
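For example, a `tools/call` request and its response over JSON-RPC 2.0 look roughly like this (the tool name, arguments, and payload values are invented for illustration):

```python
import json

# Illustrative JSON-RPC 2.0 request invoking an MCP tool
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_stock_quote",          # hypothetical tool name
        "arguments": {"symbol": "SAP"},
    },
}

# A matching response carries the same id and a result payload
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": '{"symbol": "SAP", "price": 215.30}'}]
    },
}

wire = json.dumps(request)          # what actually travels over the transport
print(json.loads(wire)["method"])   # → tools/call
```

The `id` correlation is what lets a client multiplex many in-flight tool calls over one connection.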
## Architecture Overview: How Everything Connects
At a high level, AWS Strands orchestrates multiple MCP servers through SAP GenAI Hub. The walkthrough below covers the four key stages of that flow.
### Walking Through the Architecture (4 Key Stages)

**Stage 1: Enterprise User Request**

Enterprise users interact with the AWS Strands Agent through SAP GenAI Hub, which provides the secure gateway to Anthropic's Claude models via Amazon Bedrock.

**Stage 2: AI Agent Orchestration**

The AWS Strands SDK handles multi-tool coordination. The MCP Client within Strands manages all communication with downstream servers, reasoning about which tools to invoke and when.

**Stage 3: MCP Protocol Communications**

The MCP Session Manager maintains persistent connections to all three specialized servers, aggregating 10+ financial tools into a unified interface. This eliminates per-request connection overhead and enables seamless cross-server coordination.

**Stage 4: Orchestrated Results**

The system synthesizes data from all servers into comprehensive outputs: investment analysis reports, risk assessment matrices, sentiment analysis, and cross-server coordination reports.
## Three Specialized MCP Servers (Demo Architecture)
We built three demo servers, each handling distinct financial intelligence capabilities:
| Server | Port | Implementation | Purpose | Key Tools |
|---|---|---|---|---|
| Financial Data | 8001 | FastAPI (Manual) | Real-time market data | Stock quotes, fundamentals, health scoring |
| Document Analysis | 8002 | FastMCP Framework | Sentiment analysis | PDF parsing, report analysis, metric extraction |
| Analytics | 8003 | FastMCP Framework | Advanced analytics | Comparison charts, risk assessment, trend analysis |
### Why Two Approaches?

**FastAPI (Manual Implementation):**
- Full control over the JSON-RPC protocol
- ~150-200 lines for a basic server
- Requires deep MCP understanding
- Best for learning the fundamentals

**FastMCP Framework:**
- Automatic protocol handling
- ~50-75 lines for a basic server
- 3-4x faster development
- Production-ready features built in

Both approaches demonstrate viable patterns; your choice depends on whether you value control or velocity.
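To make that trade-off concrete, here is a sketch of the JSON-RPC routing a hand-rolled server has to own (simplified and framework-free, with a stub tool; FastMCP generates the equivalent plumbing from decorated functions):

```python
# Sketch of the manual JSON-RPC plumbing a hand-rolled MCP server owns.
# The tool registry here is a stub, not the demo's actual implementation.

TOOLS = {
    "analyze_financial_report": lambda args: {"sentiment": "positive"},  # stub
}

def handle_jsonrpc(message: dict) -> dict:
    """Route a JSON-RPC 2.0 message to the right handler."""
    msg_id = message.get("id")
    method = message.get("method")
    if method == "tools/list":
        result = {"tools": [{"name": name} for name in TOOLS]}
    elif method == "tools/call":
        params = message.get("params", {})
        tool = TOOLS.get(params.get("name"))
        if tool is None:
            return {"jsonrpc": "2.0", "id": msg_id,
                    "error": {"code": -32601, "message": "Unknown tool"}}
        result = tool(params.get("arguments", {}))
    else:
        return {"jsonrpc": "2.0", "id": msg_id,
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": msg_id, "result": result}
```

Add error codes, schema validation, capability negotiation, and transport handling on top of this, and the ~150-200 line estimate for a manual server becomes plausible.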
Here's a FastMCP server example:
```python
from fastmcp import FastMCP

mcp = FastMCP("document-analysis-server")

@mcp.tool()
def analyze_financial_report(content: str) -> dict:
    """Analyze financial text for sentiment and insights."""
    positive_keywords = ['growth', 'profit', 'strong', 'improved']
    negative_keywords = ['decline', 'loss', 'weak', 'reduced']
    # Sentiment analysis logic (helper returns a label and a confidence score)
    sentiment, confidence = calculate_sentiment(
        content, positive_keywords, negative_keywords
    )
    return {
        "sentiment": sentiment,
        "confidence_score": confidence,
        "key_findings": extract_findings(content),
        "identified_risks": identify_risks(content),
    }

if __name__ == "__main__":
    mcp.run(transport="http", host="127.0.0.1", port=8002)
```
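The helper functions (`calculate_sentiment`, `extract_findings`, `identify_risks`) are defined elsewhere in the demo. One plausible shape for `calculate_sentiment`, along the lines the keyword lists suggest, is naive keyword counting that returns a label plus a confidence score; this is an illustrative variant, not the demo's actual implementation:

```python
def calculate_sentiment(content: str, positive_keywords: list[str],
                        negative_keywords: list[str]) -> tuple[str, float]:
    """Naive keyword-count sentiment: returns (label, confidence in [0, 1])."""
    text = content.lower()
    pos = sum(text.count(word) for word in positive_keywords)
    neg = sum(text.count(word) for word in negative_keywords)
    total = pos + neg
    if total == 0:
        return "neutral", 0.0
    label = "positive" if pos > neg else "negative" if neg > pos else "neutral"
    confidence = abs(pos - neg) / total  # margin between the two counts
    return label, confidence

label, conf = calculate_sentiment(
    "Strong growth and improved profit, despite a small decline in one segment.",
    ['growth', 'profit', 'strong', 'improved'],
    ['decline', 'loss', 'weak', 'reduced'],
)
# 4 positive hits vs 1 negative → ("positive", 0.6)
```

A production server would swap this for a proper NLP model, but the tool's interface to the agent stays identical.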
## The Session Manager Pattern
Managing connections to three MCP servers in our demo required persistent sessions without context manager complexity—as shown in Stage 3 of the architecture diagram.
The solution: A custom MCPSessionManager using Python's ExitStack:
```python
from util.mcp_session_manager import MCPSessionManager

# Initialize manager
mcp_manager = MCPSessionManager()

# Establish persistent connections (Stage 3)
mcp_manager.start_sessions({
    "financial_data": "http://127.0.0.1:8001/mcp",
    "document_analysis": "http://127.0.0.1:8002/mcp",
    "analytics_reporting": "http://127.0.0.1:8003/mcp",
})

# Aggregate tools from all servers
all_tools = mcp_manager.get_all_tools()

# Create unified agent (Stage 2)
financial_agent = Agent(
    model=sap_genai_hub_model,
    tools=all_tools,
    system_prompt=financial_expert_prompt,
)
```
This pattern eliminates boilerplate while demonstrating enterprise requirements: connection pooling, error recovery, and audit logging.
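The manager class itself isn't shown in the notebook excerpt. Stripped of MCP specifics, its core is a `contextlib.ExitStack` that keeps each session's context manager open until an explicit shutdown. A minimal sketch (the session handle here is a stand-in, not a real MCP client):

```python
from contextlib import ExitStack, contextmanager

@contextmanager
def open_session(url: str):
    """Stand-in for an MCP client session; yields a connected handle."""
    session = {"url": url, "open": True}
    try:
        yield session
    finally:
        session["open"] = False  # connection torn down when the stack closes

class MCPSessionManager:
    """Keeps several context-managed sessions alive without nested `with` blocks."""

    def __init__(self):
        self._stack = ExitStack()
        self.sessions = {}

    def start_sessions(self, servers: dict[str, str]) -> None:
        # enter_context keeps each session open until close_all() unwinds the stack
        for name, url in servers.items():
            self.sessions[name] = self._stack.enter_context(open_session(url))

    def close_all(self) -> None:
        self._stack.close()  # exits every session in reverse order

manager = MCPSessionManager()
manager.start_sessions({"financial_data": "http://127.0.0.1:8001/mcp"})
manager.close_all()  # all sessions released in one call
```

The point of `ExitStack` is that cleanup stays exception-safe and ordered even though the sessions outlive any single `with` block.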
## Demo Results: AWS + SAP Integration in Action
Following the architecture flow from Stage 1 → Stage 4, when a user asks: "Provide comprehensive investment analysis for SAP", the agent automatically:
- Fetches stock data (Financial Server) → Current metrics
- Analyzes sentiment (Document Server) → Report assessment
- Calculates risk (Analytics Server) → Investment scoring
- Synthesizes report (Stage 4 Outputs) → Executive-ready recommendation
No explicit orchestration. No hardcoded workflows. The agent reasons about tool usage and coordinates automatically across all three MCP servers.
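The final synthesis step can be pictured as merging the three servers' outputs into a single report structure. A toy sketch under assumed response shapes (field names, sample values, and the recommendation rule are invented for illustration, not investment logic):

```python
def synthesize_report(quote: dict, sentiment: dict, risk: dict) -> dict:
    """Merge the three servers' outputs into one summary (toy shapes and rule)."""
    recommendation = (
        "consider" if sentiment["sentiment"] == "positive" and risk["score"] <= 5
        else "hold"
    )
    return {
        "ticker": quote["symbol"],
        "price": quote["price"],
        "sentiment": sentiment["sentiment"],
        "risk_score": risk["score"],
        "recommendation": recommendation,
    }

report = synthesize_report(
    {"symbol": "SAP", "price": 215.30},   # Financial Data server (sample values)
    {"sentiment": "positive"},            # Document Analysis server
    {"score": 4},                         # Analytics server
)
```

In the demo this merge is performed by the LLM itself at Stage 4; the sketch just shows the shape of the data flowing into it.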
### Demo Performance Metrics
From our Devtoberfest proof-of-concept:
- 30% potential reduction in comprehensive financial analysis time
- 10-20% efficiency gains demonstrated for individual stock analysis
- Automatic metrics tracking via AWS Strands observability
- Production-ready monitoring patterns through CloudWatch integration
## Enterprise Security: SAP GenAI Hub Integration (Stage 1)
The demo showcases how SAP Generative AI Hub provides critical governance when integrating with AWS:
- ✅ Content filtering on inputs and outputs
- ✅ Data masking for sensitive information
- ✅ Centralized policies across SAP ecosystem
- ✅ Compliance support for regulatory requirements
The Hub orchestrates access to Amazon Bedrock models (Claude 3.5, Titan) while maintaining security boundaries essential for enterprise deployments—all happening at Stage 1 of our architecture.
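To make the data-masking idea tangible, here is a toy regex-based pre-processor in the same spirit. This is purely illustrative; it is not how SAP GenAI Hub implements masking:

```python
import re

# Toy masking pass: replace obviously sensitive patterns before text
# reaches the model. Real enterprise masking is far more sophisticated.
MASKS = [
    (re.compile(r"\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b", re.I), "<EMAIL>"),
    (re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"), "<CARD>"),
]

def mask_sensitive(text: str) -> str:
    for pattern, token in MASKS:
        text = pattern.sub(token, text)
    return text

masked = mask_sensitive("Contact jane.doe@example.com, card 4111 1111 1111 1111.")
# → "Contact <EMAIL>, card <CARD>."
```

Centralizing this kind of filtering in the Hub, rather than in every agent, is what makes the policies enforceable across the whole SAP ecosystem.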
## When to Use This Architecture Pattern
This demo architecture excels when you need to:
✅ Coordinate 3+ specialized systems or data sources
✅ Prototype rapidly with a clear path to production
✅ Favor model-driven flexibility over explicit workflows
✅ Adopt standard protocols (MCP) for future extensibility
✅ Rely on built-in observability for production monitoring
✅ Enforce enterprise security and governance
## What's Next: From Demo to Production
The demo system showcases integration possibilities:
- SAP Integration: Connect MCP servers to SAP business processes
- Multi-Tenant Deployments: Shared MCP infrastructure for multiple organizations
- Hybrid Architectures: On-premises SAP + cloud-native AI services
- Domain-Specific Agents: Specialized agents for procurement, finance, HR
## Try It Yourself
Both notebooks are available in our GitHub repository. The progression from research agent to multi-server orchestration provides a practical learning path.
Key Takeaways:
- Start Simple: Build single-agent systems first
- Learn MCP: Understand the protocol fundamentals
- Scale Thoughtfully: Use frameworks and patterns for production
- Secure by Design: Implement proper auth, audit, monitoring
- Observe Everything: Leverage built-in observability
## Full Technical Deep-Dive + Video Tutorial
Want the complete implementation with detailed architecture walkthroughs, video tutorial, and production deployment guidance?
👉 Watch the video tutorial and read the full guide on MyITBasics
This includes:
- Step-by-step video tutorial walking through the entire demo
- Detailed MCP protocol implementation
- AWS and SAP integration patterns
- High-resolution architecture diagrams
- Cost analysis and ROI calculations
- AgentCore platform integration
- Enterprise architecture considerations
- Complete code samples and notebooks
## Discussion Questions
I'd love to hear your experiences:
- What challenges have you faced orchestrating multiple AI agents?
- How do you approach AWS and SAP GenAI integration in your projects?
- What's your strategy for securing enterprise AI integrations?
Abraham Arellano Tavara | Senior Solutions Architect, AWS Munich | LinkedIn
