Week 3 - Building Agents with SDKs and Improving Discovery with AI
Batch 09 – BeSA Cloud Academy
Disclaimer:
These notes were drafted using AI for clarity, structure, and readability. They are intended solely for learning purposes.
These are the structured notes from Week 3, covering only the two role plays. They are meant as a quick revision for those who attended the session and a concise recap for anyone who couldn’t make it.
Role Play 1 – Technical Session: Getting Started with Strands Agent
Context
This conversation focused on the practical challenges of building agents and how a standardized SDK approach can simplify development.
The customer started with a basic understanding of what is needed to build an agentic AI system.
Core Components Required for an Agent
To build an agent system, several components are required:
Infrastructure
- Cloud or on-premises environment to run the workloads.
Foundation Model
- Acts as the “brain” of the agent.
Supporting Services
- Security
- Memory for conversations
- Observability
- Orchestration
The solutions architect confirmed that this understanding is correct and forms the baseline for agent architectures.
Common Challenges When Building Agents
The discussion highlighted several practical challenges teams face:
Steep learning curve
- Multiple frameworks
- Different SDKs
- Rapidly evolving ecosystem
Complex orchestration
- Managing how agents call tools
- Handling multi-step workflows
Black box behavior
- Limited visibility into what the agent is doing
- Hard to debug reasoning steps
Language and framework fragmentation
- Switching between tools and languages increases complexity.
The main theme here was the need for standardization.
What is Strands Agent
Strands Agent was introduced as a way to simplify agent development.
Definition
- An open-source SDK designed for building agents using minimal code.
Conceptually it combines:
- Models (brain)
- Tools (hands)
This allows developers to focus on agent behavior rather than infrastructure complexity.
Understanding SDK vs Framework
An important clarification was made around SDKs and frameworks.
SDK (Software Development Kit)
- Collection of tools, libraries, and documentation.
- Helps developers build applications faster.
- Provides reusable building blocks.
Analogy used:
- Like Lego pieces.
- Instead of creating every component from scratch, you assemble existing blocks.
Framework
- Defines architectural structure and rules.
- Determines how components interact.
Analogy used:
- Like a blueprint for a building.
Strands essentially provides both:
- The framework structure
- The SDK tools to implement it.
Why Use Strands
Key benefits mentioned:
Ease of use
- Few lines of code to build agents.
Native AWS integrations
- Works naturally with AWS services.
Model agnostic
- Can work with different models such as Claude, OpenAI models, or Llama.
Rapid experimentation
- Developers can iterate and deploy faster.
Agent Interaction Flow
The discussion explained how the components interact.
Agent
- Acts as the orchestrator.
Prompt
- User input that triggers the workflow.
Model
- Performs reasoning.
- Determines which tools are needed.
Tools
- Execute actions such as API calls or sending emails.
Response
- Final output returned to the user.
This cycle operates continuously in what was described as an agentic loop.
Typical workflow:
Prompt → Reason → Tool Selection → Tool Execution → Response
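The loop above can be sketched in plain Python. This is a stdlib-only illustration, not the Strands SDK: the `stub_model` function and the `calculator` tool are stand-ins for a real foundation model and a real tool.

```python
# A minimal agentic loop: the "model" decides whether a tool is needed,
# the loop executes the tool, and the result is fed back to the model
# until it produces a final answer.

def calculator(expression: str) -> str:
    """Toy tool: evaluate a simple arithmetic expression (demo only)."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def stub_model(prompt: str, observations: list[str]) -> dict:
    """Stand-in for model reasoning: request the calculator once, then answer."""
    if not observations:
        return {"action": "tool", "tool": "calculator", "input": "6 * 7"}
    return {"action": "final", "answer": f"The result is {observations[-1]}"}

def run_agent(prompt: str) -> str:
    observations: list[str] = []
    while True:                                   # the agentic loop
        step = stub_model(prompt, observations)   # Reason
        if step["action"] == "final":
            return step["answer"]                 # Response
        tool = TOOLS[step["tool"]]                # Tool Selection
        observations.append(tool(step["input"]))  # Tool Execution

print(run_agent("What is 6 times 7?"))
```

A real SDK hides this loop behind a single agent invocation, which is exactly the complexity reduction discussed above.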
Working with Models
The example showed how developers can:
- Import the agent
- Specify the model (example: Claude 3.5 Sonnet)
- Provide system instructions
- Invoke the agent
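In pseudocode, those four steps might look like the sketch below. The class and parameter names (`Agent`, `model`, `system_prompt`) and the model identifier are assumptions about the SDK's surface, not verified API; check the Strands documentation for the actual names.

```python
# Illustrative pseudocode only -- names are assumed, not verified.
from strands import Agent                          # import the agent

agent = Agent(
    model="claude-3-5-sonnet",                     # specify the model (assumed id)
    system_prompt="You are a helpful assistant.",  # system instructions
)

response = agent("Summarize this week's notes.")   # invoke the agent
```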
Another interesting point discussed was running models locally using Ollama.
This allows developers to:
- Experiment locally
- Avoid cloud dependency during development
- Prototype faster.
Tools in Strands
Tools were compared to tools used by craftsmen.
Just like a carpenter needs specific tools, an agent requires the right tools to perform tasks.
Two types of tools were mentioned:
Pre-built tools
Examples include:
- HTTP request tools
- Calculator tools
Custom tools
Developers can create tools using a simple decorator approach in Python.
Example concept:
- Define a function
- Add a tool decorator
- The agent can now invoke it.
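The three steps above can be sketched with a stdlib-only decorator. This registry is a stand-in for the SDK's own tool decorator; the `get_order_status` function and its behavior are hypothetical.

```python
# A sketch of the decorator approach: registering a plain function as a
# tool that an agent could later look up and invoke by name.
TOOL_REGISTRY = {}

def tool(fn):
    """Register a function as an agent-invokable tool."""
    TOOL_REGISTRY[fn.__name__] = fn
    return fn

@tool
def get_order_status(order_id: str) -> str:
    """Pretend internal-API call; the endpoint and response are made up."""
    return f"Order {order_id}: shipped"

# The agent (here, us) can now invoke the tool by name:
print(TOOL_REGISTRY["get_order_status"]("A-123"))
```

The decorator also gives the SDK a natural place to capture the function's name, signature, and docstring, which is what the model uses to decide when to call the tool.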
This enables developers to connect agents to internal APIs or services.
Model Context Protocol (MCP)
A key concept introduced was MCP.
Definition
- An open standard for connecting AI systems to external tools and services.
Analogy used:
- A USB hub.
Your laptop might have one port, but the hub allows connection to many devices.
Similarly, MCP allows agents to interact with multiple systems through a standardized interface.
Benefits include:
- Reduced integration complexity
- Consistent communication format
- Easier expansion of agent capabilities.
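Concretely, MCP exchanges JSON-RPC 2.0 messages, and a tool invocation uses the `tools/call` method. The tool name and arguments below are hypothetical; only the message envelope follows the protocol.

```python
import json

# An MCP tool-invocation request: a JSON-RPC 2.0 message with the
# "tools/call" method. The tool and its arguments here are made up.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_tickets",                 # hypothetical tool
        "arguments": {"query": "open incidents"},
    },
}
print(json.dumps(request, indent=2))
```

Because every server speaks this same envelope, adding a new capability to an agent means connecting another MCP server rather than writing another bespoke integration.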
Role Play 2 – Behavioral Session: Using AI to Accelerate Discovery
Context
This conversation explored how architects can use AI tools to prepare for customer engagements and accelerate discovery.
The scenario involved a solutions architect preparing for a new customer meeting with very little time.
Traditional Preparation vs AI-Assisted Preparation
Traditionally, preparing for a customer engagement could take several days.
Typical workflow:
- Research the company
- Understand industry trends
- Identify likely technical challenges
- Prepare discovery questions
Using AI changes this process significantly.
Instead of multiple days, preparation can be compressed into a few hours.
Initial Research with AI
The architect begins by briefing the AI with basic information about the customer:
- Industry
- Market size
- Business trends
- Competitive pressures
The AI then generates insights such as:
- Regulatory environment (e.g., GDPR)
- Industry modernization challenges
- Technical considerations like latency sensitivity.
This provides a fast baseline understanding.
Adding Human Context
However, AI does not understand internal personalities or organizational dynamics.
This is where the TAM (Technical Account Manager) adds valuable insight.
Examples discussed included:
A cost-focused CTO
- Strong requirement to reduce costs.
A risk-averse CISO
- Concerned about customer data protection.
An engineering leader with a small team
- Limited capacity to manage complexity.
This human context becomes critical.
Combining AI insights with relationship knowledge produces much better preparation.
Anticipating Objections
The architect then uses AI to anticipate objections.
Example approach:
- Feed AI the stakeholder concerns.
- Ask it to generate likely objections or concerns.
This allows the architect to prepare responses in advance rather than reacting in the meeting.
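One way to phrase such a prompt, using the stakeholder profiles from this session (the wording is illustrative, not a prescribed template):

```text
Stakeholders: a cost-focused CTO, a risk-averse CISO concerned about
customer data protection, and an engineering leader with a small team.

Given these stakeholders and a proposed cloud modernization, list the
most likely objections each person will raise, and suggest a
one-sentence response to each.
```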
Generating a Discovery Framework
AI can also help generate a discovery framework.
This includes:
- Business drivers
- Technical risks
- Modernization priorities
- Operational constraints
However, these questions are often generic.
The architect must adapt them to the specific context of the customer.
Example:
Generic question
- What are your modernization goals?
Contextual question
- How is your small engineering team managing technical debt during modernization?
AI provides the structure, while the architect adds depth.
Using AI After Meetings
Another useful technique discussed was the “raw notes dump.”
After the meeting, the architect:
- Pastes rough notes into the AI tool.
- Asks it to identify:
- Explicit requirements
- Implicit concerns
- Risks
- Action items
The AI performs structured analysis on unstructured notes.
This helps convert messy meeting notes into organized documentation.
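A raw-notes-dump prompt might look like the sketch below; the wording is illustrative, and the notes placeholder stays as-is.

```text
Here are my raw, unstructured notes from today's customer meeting:
[paste notes]

Analyze them and extract, as separate lists:
1. Explicit requirements
2. Implicit concerns
3. Risks
4. Action items
```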
Producing Clean Documentation
The final step is creating clear documentation to share with the customer.
Examples include:
- Requirements summaries
- Key concerns identified
- Architecture considerations
- Next steps
This demonstrates that the architect is listening and thinking strategically.
Important Advice
One key warning from the conversation:
Do not walk into a customer meeting with a generic presentation.
Better approach:
- Use AI to understand the customer’s world.
- Combine that with the TAM’s relationship knowledge.
- Tailor discussions to real concerns.
The winning combination is: AI research + human insight.
Week 3 Consolidated Takeaways
From the technical role play:
- SDKs and frameworks can significantly reduce complexity when building agents.
- Standardization helps address fragmentation in the agent ecosystem.
- Tools and MCP enable agents to interact with external systems in a scalable way.
From the behavioral role play:
- AI can dramatically accelerate discovery preparation.
- Human context and relationships remain essential.
- AI works best as a research and analysis assistant rather than a decision maker.
This week shifted the focus from foundational concepts and architecture to practical workflows—both for building agents and for improving how architects engage with customers during discovery.
