DEV Community

Adam Laszlo
Deconstructing a Production-Ready AI Agent: A Beginner's Guide - Part 2

The Agent's Brain — Defining Logic and Reasoning with Strands Agents

2.1 Introduction: The "Blueprint" vs. the "Factory"

In Part 1, we established a secure "front door." A user can now authenticate and send a prompt to a secure API endpoint. The next question is: what handles that prompt?

This brings us to the most confusing—and most important—distinction in this modern AWS stack: the difference between Strands Agents and Amazon Bedrock AgentCore.

To clarify this, one can use an analogy of building a sophisticated robot:

  • Strands Agents (SDK): This is the "Blueprint." It is an open-source Python framework that a developer uses to define the robot's logic. This is the code for its "brain" (how it reasons), its "memory" (how it remembers), and the "tools" it can use (its "hands"). Strands is the what.

  • Amazon Bedrock AgentCore: This is the "Secure Factory." It is the fully-managed AWS platform where the "Blueprint" is deployed. AgentCore is the "where." It takes the Strands code and handles what one article calls the "mountain of engineering" required to run it securely and at scale: the servers, the security, the isolation, and the monitoring.

Amazon Bedrock AgentCore is framework-agnostic. This means the "Secure Factory" doesn't care which blueprint is used. The szakdolgozat repository chooses to deploy a Strands agent, but AgentCore could just as easily run an agent built with LangGraph or CrewAI.

This part of the series focuses only on the "Blueprint": the Strands Agents SDK.

2.2 What is Strands Agents? The Open-Source "Blueprint"

Strands Agents is an open-source framework, developed by AWS, for building production-ready, multi-agent AI systems. Its core philosophy is "Model-Driven Orchestration".

This philosophy marks a significant departure from traditional programming:

  • Traditional Code (Imperative): A developer writes explicit if/then logic. "If the user says 'stock report', then call function_A, then function_B, then function_C." This is rigid and breaks easily.

  • Strands (Declarative): A developer does not write this explicit if/then logic. Instead, they simply:

  1. Define a Prompt: (e.g., "You are an expert financial assistant.")

  2. Define a List of Tools: (e.g., "Here are the tools you can use: gather_stock_data, analyze_stock_performance.")

Strands then leverages the Large Language Model (LLM) itself as the orchestrator. It uses the model's advanced reasoning capabilities to plan the necessary steps, chain its thoughts, call the tools it needs, and reflect on the results to achieve the user's goal. This is the essence of "agentic" behavior.
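The contrast can be made concrete with a toy sketch in plain Python (no Strands, no real LLM). The developer supplies only a prompt and a tool list; a stub "model" picks the next step by reading the tools' docstrings, standing in for the LLM's reasoning:

```python
# Toy illustration of model-driven orchestration. The stub below stands in
# for a real LLM; in Strands, the actual model makes this decision.

def gather_stock_data(ticker: str) -> dict:
    """Fetch price, volume, and market cap for a ticker."""
    return {"ticker": ticker, "price": 101.5}

def analyze_stock_performance(data: dict) -> str:
    """Summarize whether a stock looks healthy."""
    return f"{data['ticker']} looks stable."

SYSTEM_PROMPT = "You are an expert financial assistant."
TOOLS = [gather_stock_data, analyze_stock_performance]

def stub_model_pick_tool(prompt: str, tools: list) -> str:
    """Stand-in for the LLM: matches prompt keywords against tool docstrings."""
    for fn in tools:
        if any(word in fn.__doc__.lower() for word in prompt.lower().split()):
            return fn.__name__
    return tools[0].__name__

# The "model" chooses the first step -- not our if/then code.
print(stub_model_pick_tool("fetch the price of SIM_STOCK", TOOLS))  # gather_stock_data
```

The developer's code never says "call gather_stock_data first"; that decision is delegated to the (here, simulated) model.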

2.3 Inside the szakdolgozat Repo: How Strands Builds the "Brain"

When the user's prompt (e.g., "Analyze SIM_STOCK") arrives from API Gateway, the Strands agent logic kicks in via its "Event Loop". This is a "Reason-Act" (ReAct) cycle:

  1. The user's prompt enters the Strands event loop.
  2. Strands passes the prompt, the conversation history (memory), and the list of available tools to the LLM (e.g., Claude 4).
  3. [Reason] The LLM "thinks out loud" (a process of reasoning) and determines its plan: "To fulfill this complex request, I must first call the gather_stock_data tool."
  4. [Act] Strands detects this "tool use" request from the LLM, validates it, and executes the corresponding Python function.
  5. The tool returns its data (e.g., a JSON blob of stock information). Strands adds this result to the ongoing conversation history.
  6. Strands "loops," sending the new context (including the stock data) back to the LLM.
  7. [Reason] The LLM re-evaluates: "Great. I have the data. Now I must call the analyze_stock_performance tool to make sense of it."
  8. [Act] Strands executes the analyze_stock_performance tool.
  9. This "Reason-Act" loop continues, with the LLM planning and Strands executing, until the LLM decides the task is fully complete and generates a final, comprehensive answer for the user.
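The nine steps above can be sketched as a self-contained loop. A scripted function stands in for the LLM (the real loop lives inside the Strands SDK), returning either a tool-use request or a final answer; the runner executes tools and feeds the results back:

```python
# Minimal Reason-Act loop sketch with a scripted stand-in for the LLM.

def gather_stock_data(ticker: str) -> dict:
    return {"ticker": ticker, "price": 101.5, "volume": 12000}

def analyze_stock_performance(data: dict) -> str:
    return f"{data['ticker']} traded {data['volume']} shares at {data['price']}."

TOOLS = {"gather_stock_data": gather_stock_data,
         "analyze_stock_performance": analyze_stock_performance}

def scripted_model(history: list) -> dict:
    """Stand-in for the LLM: plans based on what is already in the history."""
    tool_results = [m for m in history if m["role"] == "tool"]
    if len(tool_results) == 0:   # [Reason] no data yet -> gather it
        return {"tool": "gather_stock_data", "args": {"ticker": "SIM_STOCK"}}
    if len(tool_results) == 1:   # [Reason] have data -> analyze it
        return {"tool": "analyze_stock_performance",
                "args": {"data": tool_results[0]["content"]}}
    return {"answer": tool_results[1]["content"]}  # task complete

def run_agent(prompt: str) -> str:
    history = [{"role": "user", "content": prompt}]
    while True:
        decision = scripted_model(history)                    # [Reason]
        if "answer" in decision:
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])  # [Act]
        history.append({"role": "tool", "content": result})   # grow the context

print(run_agent("Analyze SIM_STOCK"))
```

Note how the loop itself contains no stock-specific logic: the "model" plans, the runner executes, and the history is the only shared state.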

Giving Your Agent "Hands" with the @tool Decorator

The practical magic of Strands lies in its simplicity. To give the agent "hands" (tools), a developer doesn't need to write complex API handlers. As the framework's documentation puts it, any Python function can easily be used as a tool with the Strands @tool decorator.

A hypothetical example from the szakdolgozat repository's logic might look this simple:

```python
from strands import tool  # the decorator is exported from the top-level strands package

import financial_api  # hypothetical client for an external financial data API

@tool
def gather_stock_data(ticker_symbol: str) -> dict:
    """
    Fetches comprehensive stock data for a given ticker symbol,
    including price, volume, and market cap.
    """
    # Call the financial API and return its payload for the LLM to use.
    data = financial_api.get_data(ticker_symbol)
    return data
```

By simply adding the @tool decorator, this Python function and its docstring (which is critical for the LLM to understand what the tool does) are now part of the "tool-belt" that Strands will show to the LLM during its reasoning step.
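To see why the docstring matters, here is a toy re-implementation of a @tool-style decorator (illustrative only, not the real Strands code): it packages the function's name, docstring, and parameter type hints into the spec that would be shown to the model.

```python
import inspect

def tool(fn):
    """Toy @tool decorator: attaches a model-facing spec to the function."""
    params = {name: p.annotation.__name__
              for name, p in inspect.signature(fn).parameters.items()}
    fn.tool_spec = {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),  # this is what the LLM reads
        "parameters": params,
    }
    return fn

@tool
def gather_stock_data(ticker_symbol: str) -> dict:
    """Fetches comprehensive stock data for a given ticker symbol."""
    return {"ticker": ticker_symbol}

print(gather_stock_data.tool_spec["name"])        # gather_stock_data
print(gather_stock_data.tool_spec["parameters"])  # {'ticker_symbol': 'str'}
```

A vague or missing docstring would leave the "description" field empty, and the model would have no idea when to call the tool; this is why the docstring is critical.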

Strands is also designed for more complex systems, offering simple primitives for multi-agent patterns:

  • Handoffs: Passing a task from one specialized agent to another (e.g., from a "Data Gathering Agent" to a "Report Writing Agent").

  • Swarms: A group of agents working in concert.

  • Agent-as-Tool: One agent can be exposed as a "tool" that a parent agent calls, just like any other function.
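The agent-as-tool pattern, reduced to plain Python: a child "agent" is just a callable, so a parent can carry it in its tool-belt like any other function (a toy sketch under that assumption, not the Strands API):

```python
def data_gathering_agent(request: str) -> dict:
    """Child agent specialized in collecting stock data."""
    return {"request": request, "price": 101.5}

def report_writing_agent(request: str, toolbelt: dict) -> str:
    """Parent agent: treats the child agent as one of its tools."""
    data = toolbelt["data_gathering_agent"](request)  # child agent used as a tool
    return f"Report: price is {data['price']}."

toolbelt = {"data_gathering_agent": data_gathering_agent}
print(report_writing_agent("SIM_STOCK", toolbelt))  # Report: price is 101.5.
```

The parent never needs to know whether a "tool" is a plain function or a whole nested agent with its own reasoning loop; the calling convention is identical.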

2.4 Strands vs. LangGraph: A Quick Aside

The research material also provides examples of agents built with LangGraph. Both Strands and LangGraph are powerful frameworks for building agents, and both can be deployed on Bedrock AgentCore. The key difference lies in their philosophy:

  • LangGraph: Requires the developer to explicitly define a graph (a flowchart). The developer defines the nodes (agents or tools) and the edges (the path the logic must follow). This is more structured and deterministic.

  • Strands: (In its simplest form) is more dynamic. It is "model-driven": the developer does not define the explicit path; they trust the LLM's reasoning to create the path on the fly, one step at a time.

The szakdolgozat repository's choice of Strands is a specific design decision. It represents a bet on the increasing power of LLM reasoning, trading the rigid control of a pre-defined graph for the dynamic flexibility of a model-driven orchestrator.
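The trade-off can be shown in a few lines: a graph-style design pins the path down as data before the run, while a model-driven design asks the "model" at every step. Both snippets below are illustrative stubs, not the real LangGraph or Strands APIs:

```python
# Explicit graph (LangGraph-style): the developer fixes the edges up front.
GRAPH = {"start": "gather", "gather": "analyze", "analyze": "end"}

def run_graph():
    node, visited = "start", []
    while node != "end":
        node = GRAPH[node]           # the path is baked into the data structure
        if node != "end":
            visited.append(node)
    return visited

# Model-driven (Strands-style): a stub "model" picks the next step at runtime.
def stub_model_next(visited):
    if "gather" not in visited:
        return "gather"
    if "analyze" not in visited:
        return "analyze"
    return "end"

def run_model_driven():
    visited = []
    while (step := stub_model_next(visited)) != "end":
        visited.append(step)         # the path emerges from per-step decisions
    return visited

print(run_graph())         # ['gather', 'analyze']
print(run_model_driven())  # ['gather', 'analyze']
```

Here both take the same path, but only the second could choose a different one at runtime (skip a step, retry, or add one) without the developer editing the graph.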

We now have a complete "blueprint" for our agent. We have used the Strands SDK to define what our agent is, how it thinks, and what tools it can use. This gives us a set of Python files—a "prototype." But this code cannot run itself, and it is not yet secure or scalable. In Part 3, we will take this "blueprint" and move it into the "secure factory" (AgentCore) to bring it to life.
