Agents are end-to-end systems assigned a specific goal. These systems operate with varying degrees of autonomy and can be classified based on how they execute tasks:
Workflow-Based Systems: All steps are explicitly defined. The AI simply generates content or executes actions following these strict instructions.
Agentic Workflows: The system is equipped with tools, knowledge bases, and boundary conditions. The AI decides the trajectory of its execution to reach the desired outcome. This often includes Human-in-the-loop (HITL) handoffs or partial autonomy, where the AI seeks clarification when a path is unclear.
Fully Autonomous Agents: The AI navigates the task independently, guided only by high-level safeguards such as budget constraints or iteration limits.
Patterns vs. Techniques
Agentic Patterns refer to the end-to-end blueprint of a process. In contrast, Techniques are the granular capabilities—such as planning, tool-calling, and self-reflection—that make up that blueprint.
The Building Blocks of Agents
LLM (The Brain): Interprets signals and manages the reasoning process to reach the end goal.
Tools: The action-performing interfaces that allow an LLM to execute specific parts of a process.
Retrieval: The system's knowledge center, where the LLM enriches its context regarding the query at hand.
Design Patterns
*LangChain* (the most widely used framework for building LLM-based applications) provides excellent documentation on conceptual design patterns.
These patterns showcase a transition from rigid workflows to autonomous agents. This is significant because workflows can be used to create specific "nodes" designed to solve a particular problem within a larger agentic setup.
- Chaining: This is a method of connecting a series of defined steps to arrive at a final outcome. It helps divide complex requirements into smaller, clearer, and more manageable sub-requirements. This design assumes that a particular sub-task is dependent on the output of its predecessor in the chain.
Example: Building a regex generator where the input is a natural language description.
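The chaining idea can be sketched in a few lines. This is a minimal illustration, not a real implementation: `call_llm` is a hypothetical stand-in that returns canned answers so the chain plumbing itself is runnable; in practice each step would be a real model call.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned answer per step."""
    if "Extract the pattern requirements" in prompt:
        return "match a 4-digit year"
    if "Write a regex" in prompt:
        return r"\b\d{4}\b"
    return prompt

def regex_chain(description: str) -> str:
    # Step 1: clarify the requirement from the natural-language description.
    requirement = call_llm(f"Extract the pattern requirements from: {description}")
    # Step 2: this step depends on the output of its predecessor in the chain.
    return call_llm(f"Write a regex for: {requirement}")

pattern = regex_chain("I need to find years like 1999 or 2024 in text")
```

The key property is the data dependency: step 2 cannot run until step 1 has produced its output.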
- Parallelization: If we have sub-tasks that can be independently executed, then we can run them in parallel rather than in sequence.
Example: A stock analysis workflow where, given a company name, individual analysis tasks (technical, fundamental, news) are invoked simultaneously to generate a comprehensive report.
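A rough sketch of the fan-out, using threads in place of concurrent LLM calls. The three analysis functions are hypothetical placeholders standing in for the real model-backed subtasks.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder subtasks; each would be an independent LLM call in practice.
def technical_analysis(company):   return f"{company}: technical signals look stable"
def fundamental_analysis(company): return f"{company}: fundamentals are sound"
def news_analysis(company):        return f"{company}: news sentiment is neutral"

def analyze(company: str) -> str:
    tasks = [technical_analysis, fundamental_analysis, news_analysis]
    # The subtasks have no dependency on each other, so they run concurrently.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda fn: fn(company), tasks))
    # Collate the independent results into one comprehensive report.
    return "\n".join(results)

report = analyze("ACME Corp")
```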
- Routing: When you have multiple use cases, each with its own process flow, you can place a router on top. The router analyzes the requirement and directs the process to the appropriate specialized flow. This introduces a level of autonomy, since an AI makes a decision based on the input.
Example: A customer support system where an incoming complaint email is analyzed by a router and sent to the relevant department's workflow (e.g., billing, technical support, or returns).
- Orchestrator-worker: This pattern involves a central "orchestrator" that plans the steps for a given request and delegates those steps to "workers." Once all workers have finished, their outputs are collated to create the final outcome. This is particularly useful when there isn't a fixed, predictable blueprint for the process.
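The orchestrator-worker shape can be sketched as a planner that produces subtasks at runtime, workers that execute them, and a collation step. `plan` and `worker` are hypothetical placeholders; in a real system both would be LLM-backed.

```python
def plan(request: str) -> list:
    """Placeholder planner; a real orchestrator would ask an LLM for subtasks."""
    return [f"research: {request}", f"draft: {request}", f"review: {request}"]

def worker(subtask: str) -> str:
    """Placeholder worker; each subtask would be delegated to an LLM or tool."""
    return f"done[{subtask}]"

def orchestrate(request: str) -> str:
    # The plan is decided at runtime, not fixed in advance -- this is what
    # distinguishes the pattern from a hard-coded chain.
    subtasks = plan(request)
    results = [worker(t) for t in subtasks]
    # Collate all worker outputs into the final outcome.
    return " | ".join(results)

final = orchestrate("quarterly report")
```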
- Evaluator-Optimizer: This design pattern is commonly used in "reflection agents." An LLM generates an initial response, which is then evaluated either by a human or by another LLM call with specific evaluation instructions. This is an iterative mode of execution, often controlled by a maximum iteration count.
Example: An entity extraction agent where you feed in text and it extracts the required data. To increase accuracy, you can equip this agent with an evaluator node where a human or another LLM checks the output and suggests improvements if the original extraction is incorrect.
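The generate-evaluate loop can be sketched as follows. `generate` and `evaluate` are hypothetical stubs (the first pass deliberately misses a field so the loop has something to fix); in a real reflection agent both would be LLM calls, with the evaluator's feedback fed back into the generator's prompt.

```python
def generate(text, feedback=None):
    """Placeholder extractor; improves its output when given feedback."""
    if feedback:
        return {"name": "Ada Lovelace", "year": 1815}
    return {"name": "Ada Lovelace"}  # first pass misses the 'year' field

def evaluate(result):
    """Placeholder evaluator; returns feedback, or None if acceptable."""
    return None if "year" in result else "missing 'year' field"

def extract_with_reflection(text, max_iters=3):
    feedback = None
    for _ in range(max_iters):  # the iteration cap bounds the loop
        result = generate(text, feedback)
        feedback = evaluate(result)
        if feedback is None:   # evaluator is satisfied -> stop iterating
            return result
    return result  # return the best effort if the cap is reached

extracted = extract_with_reflection("Ada Lovelace was born in 1815.")
```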
Agents
Agents (per LangChain’s interpretation) refer to systems that combine the LLM (the brain) with actions using tools to solve a problem. These tools act as wrappers that enable the LLM to call external APIs, other agents, or workflows, utilizing their output to solve the task at hand.
How does an LLM make a tool call?
When designing an agentic system, the LLM is provided with a list of available tools, descriptions of what they do, and the arguments they require. The LLM then generates a structured output (such as JSON) specifying which tool it wants to call and the arguments it wants to pass. This output is used to trigger the actual function (the tool wrapper) outside of the LLM’s core generation. The result of that function is then appended back into the LLM's context so it can decide on the next step.
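The loop above can be sketched end to end. The structured output and message shapes below are illustrative assumptions, and `call_llm` is a placeholder that emits the kind of JSON-like tool-call decision a real model would generate.

```python
def get_weather(city):
    """A tool wrapper; the actual function runs outside the LLM."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}  # tool registry shown to the LLM

def call_llm(messages):
    """Placeholder: a real model would produce this structured output itself."""
    if not any(m["role"] == "tool" for m in messages):
        # First turn: the model decides which tool to call and with what args.
        return {"tool": "get_weather", "args": {"city": "Paris"}}
    # Later turn: the tool result is in context, so the model answers directly.
    return {"answer": messages[-1]["content"]}

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
decision = call_llm(messages)
if "tool" in decision:
    # Execute the requested tool outside the model's core generation...
    result = TOOLS[decision["tool"]](**decision["args"])
    # ...and append the result back into the context for the next step.
    messages.append({"role": "tool", "content": result})
    decision = call_llm(messages)
```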