A "Metadata-First" Approach Using AWS Bedrock AgentCore
For a long time, discussions about AI and SAP have been polarized between two extremes. On one side, impressive demos that only work in "happy path" scenarios. On the other, abstract whiteboard architectures.
Recently, I’ve been building an agentic architecture using AWS Bedrock AgentCore and SAP OData APIs. My goal wasn't just to "connect" an LLM to SAP, but to create an agent that actually understands OData as an expert would.
Here is what I’ve learned about making Agentic AI reliable enough for the SAP ecosystem.
The "Hallucination" Trap in OData
The biggest friction when placing a Generative AI model over SAP is its tendency to improvise. Left to its own devices, an LLM will:
- Guess field names (like `SalesOrderID` instead of `SalesOrder`).
- Ignore mandatory keys in EntitySets.
- Try to use unsupported OData filters.
In an enterprise environment, a 400 Bad Request or an incorrect data fetch is a non-starter.
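To make that concrete, here is a minimal before/after example. The service path, entity set, and property names follow the public S/4HANA Sales Order API, but treat them as illustrative for any given system:

```python
# What an unconstrained model tends to improvise: a guessed property name that
# does not exist on the entity, so Gateway answers 400 Bad Request.
guessed = ("/sap/opu/odata/sap/API_SALES_ORDER_SRV/A_SalesOrder"
           "?$filter=SalesOrderID eq '100'")        # wrong: the key is SalesOrder

# The same request after checking $metadata: valid key property, and only
# fields the EntityType actually exposes.
validated = ("/sap/opu/odata/sap/API_SALES_ORDER_SRV/A_SalesOrder"
             "?$filter=SalesOrder eq '100'"
             "&$select=SalesOrder,SoldToParty,TotalNetAmount")
```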
Architecture: The "Three-Step Discovery" Pattern
To solve this, I moved away from a "single-prompt" approach. Instead, I implemented a strict Discovery -> Inspection -> Execution flow within AgentCore.
Instead of guessing, the agent is forced to follow a sequence:
1. Catalog Discovery: The agent first scans the SAP Service Catalog to find the correct API. This ensures it targets the specific service (like `API_SALES_ORDER`) instead of hallucinating endpoint names.
2. Metadata Inspection: It must fetch and parse the XML `$metadata` before building any query. Beyond just reading EntitySets and Keys, the agent maps Navigation Properties. This is crucial: it allows the agent to understand how to navigate from a Sales Order Header to its Items or Business Partners dynamically, using OData `$expand` logic rather than making multiple "blind" guesses.
3. Strict Execution: It constructs the full URL, including filters, selects, and expands, based strictly on the technical constraints learned in the previous step. The metadata acts as "just-in-time" documentation for the LLM.
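Below is a condensed sketch of that flow in Python. It is illustrative, not the production agent: the host, credentials, catalog path, and the catalog's JSON field names are assumptions, and error handling, paging, and CSRF handling are stripped out. What it does show is the core idea: the query builder can only use entity sets, properties, and navigations that were actually parsed from `$metadata`.

```python
"""Illustrative sketch of the Discovery -> Inspection -> Execution flow."""
import xml.etree.ElementTree as ET

import requests

SAP_HOST = "https://sap.example.com"   # assumption: system-specific
AUTH = ("SERVICE_USER", "********")    # assumption: Basic auth, for brevity only


def discover_service(keyword: str) -> str:
    """Step 1 - Catalog Discovery: resolve a business intent to a real service name."""
    catalog = f"{SAP_HOST}/sap/opu/odata/iwfnd/catalogservice;v=2/ServiceCollection"
    resp = requests.get(catalog, params={"$format": "json"}, auth=AUTH, timeout=30)
    for svc in resp.json()["d"]["results"]:
        if keyword.upper() in svc["TechnicalServiceName"].upper():
            return svc["TechnicalServiceName"]
    raise LookupError(f"no catalog entry matches '{keyword}'")


def inspect_metadata(service: str) -> dict:
    """Step 2 - Metadata Inspection: learn EntitySets, keys, and navigations."""
    url = f"{SAP_HOST}/sap/opu/odata/sap/{service}/$metadata"
    root = ET.fromstring(requests.get(url, auth=AUTH, timeout=30).content)

    def local(tag: str) -> str:          # strip the EDMX namespace
        return tag.split("}")[-1]

    schema = {"entity_types": {}, "entity_sets": {}}
    for el in root.iter():
        kind = local(el.tag)
        if kind == "EntityType":
            schema["entity_types"][el.get("Name")] = {
                "keys": [p.get("Name") for k in el if local(k.tag) == "Key"
                         for p in k if local(p.tag) == "PropertyRef"],
                "properties": [p.get("Name") for p in el if local(p.tag) == "Property"],
                "navigations": [n.get("Name") for n in el
                                if local(n.tag) == "NavigationProperty"],
            }
        elif kind == "EntitySet":
            schema["entity_sets"][el.get("Name")] = el.get("EntityType", "").split(".")[-1]
    return schema


def build_query(service: str, schema: dict, entity_set: str, filters: str = "",
                select: list[str] | None = None, expand: list[str] | None = None) -> str:
    """Step 3 - Strict Execution: only emit what the metadata says exists."""
    if entity_set not in schema["entity_sets"]:
        raise ValueError(f"'{entity_set}' is not an EntitySet of {service}")
    etype = schema["entity_types"][schema["entity_sets"][entity_set]]
    select, expand = select or [], expand or []
    unknown = [f for f in select if f not in etype["properties"]]
    bad_nav = [n for n in expand if n not in etype["navigations"]]
    if unknown or bad_nav:
        raise ValueError(f"rejected by metadata: fields={unknown}, expands={bad_nav}")
    parts = [f"$filter={filters}"] if filters else []
    if select:
        parts.append("$select=" + ",".join(select))
    if expand:
        parts.append("$expand=" + ",".join(expand))
    url = f"{SAP_HOST}/sap/opu/odata/sap/{service}/{entity_set}"
    return url + ("?" + "&".join(parts) if parts else "")
```

Each step maps naturally onto its own tool, so the orchestration can enforce the order instead of merely suggesting it in the prompt.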
The result? The agent behaves less like a "chatbot" and more like a dynamic middleware that validates its own logic against the source of truth (SAP) before hitting 'Enter'.
Why Bedrock AgentCore?
I chose AgentCore because it treats Memory and Tools as first-class citizens.
SAP integrations need "state". If the agent learns about a specific Sales Order schema in step A, it needs to carry that technical context to step B without losing precision. By using session-aware memory and explicit tool boundaries, I can ensure that:
- The Agent reasons, but the Tools enforce.
- The agent has an Identity (actor-based), making auditing much easier.
- Prompt Engineering is secondary to Tool Design.
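To make "the Agent reasons, but the Tools enforce" concrete, here is a minimal sketch of what explicit tool boundaries and session-aware memory can look like. The names, the session dataclass, and the schema format are my own illustration, not the AgentCore SDK's API; the point is that the execution tool accepts structured parameters (service, entity set, filter, select) rather than a free-form URL, so the model cannot smuggle in an unvalidated request.

```python
from dataclasses import dataclass, field


@dataclass
class SapSession:
    """Session-aware memory: technical context carried between tool calls."""
    discovered_service: str | None = None
    schemas: dict = field(default_factory=dict)   # service name -> parsed $metadata


# An explicit tool boundary: structured parameters only, never a raw URL string.
EXECUTE_SAP_QUERY_SPEC = {
    "name": "execute_sap_query",
    "description": "Run an OData query against a service whose $metadata was already inspected.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "service":    {"type": "string"},
            "entity_set": {"type": "string"},
            "filter":     {"type": "string"},
            "select":     {"type": "array", "items": {"type": "string"}},
            "expand":     {"type": "array", "items": {"type": "string"}},
        },
        "required": ["service", "entity_set"],
    },
}
```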
The Golden Rule: Metadata-First
The most important principle I've adopted is:
No SAP call should exist without metadata validation.
If the agent doesn't have the metadata for a service, it is prohibited from calling it. This "Constraint-Based Intelligence" is what makes the system viable. We don't want a creative agent; we want a compliant one.
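In code, that rule reduces to a guard clause at the top of the execution tool. A sketch, reusing the illustrative `SapSession` and schema shape from the earlier snippets:

```python
def execute_sap_query(session: SapSession, service: str, entity_set: str,
                      filter_expr: str = "", select: list[str] | None = None) -> str:
    """The only tool allowed to touch SAP, and only after metadata validation."""
    schema = session.schemas.get(service)
    if schema is None:
        # Metadata-first: refuse and push the agent back to the Inspection step.
        raise PermissionError(
            f"No $metadata cached for '{service}'; run the metadata tool first."
        )
    if entity_set not in schema["entity_sets"]:
        raise ValueError(f"'{entity_set}' is not an EntitySet of {service}")
    url = f"/sap/opu/odata/sap/{service}/{entity_set}"
    if filter_expr:
        url += f"?$filter={filter_expr}"
    if select:
        url += ("&" if "?" in url else "?") + "$select=" + ",".join(select)
    return url  # the actual GET (with the user's credentials) is omitted here
```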
Is it Ready for Production?
From an enterprise perspective, this approach aligns with what IT departments actually want:
- Security: Authentication remains in SAP (Basic or OAuth), and the agent only sees what the user is authorized to see.
- Governance: You aren't "training" a model on your data; you are giving a model a "window" to look at it.
- Scalability: It works with standard OData, meaning it supports S/4HANA (Public or Private) and even legacy ECC systems via Gateway.
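On the Security bullet above, the practical pattern is that the tool forwards the end user's own credentials or token rather than a broad technical user, so SAP's existing authorizations keep doing the filtering. A minimal sketch, assuming an OAuth bearer token is available per session:

```python
import requests


def call_sap_as_user(url: str, user_token: str) -> dict:
    """Forward the end user's token so SAP authorizations stay in charge."""
    resp = requests.get(
        url,
        headers={"Authorization": f"Bearer {user_token}",
                 "Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()   # surface 401/403 instead of silently widening access
    return resp.json()
```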
Final Thoughts
After building this, one conclusion became clear: Agentic AI with SAP is viable — as long as the agent is more constrained than creative.
When we prioritize metadata over prompts and tools over improvisation, AI stops being a "black box" and begins to become a powerful, intelligent interface for complex ERP data.
