**Introduction**

Large language models have dramatically improved the ability of machines to generate text, code, and explanations. However, most current AI systems still operate in a reactive way: a user provides a prompt and the model produces a response.
The research paper “The Rise of Agentic AI: A Review of Definitions, Frameworks, Architectures, Applications, Evaluation Metrics, and Challenges” (2025) examines how artificial intelligence is moving beyond this reactive pattern toward goal-driven autonomous systems that require little or no human intervention, often described as Agentic AI.
The authors review a large body of literature and identify how modern AI systems are being designed to plan tasks, interact with external tools, maintain memory, and adapt their behavior during execution. Instead of producing a single answer, these systems attempt to complete entire workflows.
This blog post discusses the key contributions of the paper, connects its ideas to concepts from an artificial intelligence course (especially intelligent agents and search algorithms), and reflects on insights gained through manual reading and through analysis with NotebookLM.
**The Goal of the Paper**

The main objective of the paper is to clarify a rapidly evolving research area. The term Agentic AI has been used widely in recent discussions about AI systems that can operate autonomously, but definitions and architectures vary significantly across studies.
To address this, the authors perform a structured review of existing research and focus on several aspects:
• definitions of agentic AI
• system architectures used to build agents
• frameworks that support agent-based systems
• evaluation methods
• real-world applications
• open challenges
Through this analysis, the paper attempts to provide a framework for understanding how autonomous AI agents are built and evaluated.
**What Makes AI “Agentic”?**

Traditional AI models typically operate through a simple process:
User Prompt → Model Output
Agentic systems introduce a loop of reasoning and action that more closely resembles decision-making in intelligent agents. Instead of producing a single response, the system may repeatedly perform the following cycle:
• interpret a goal
• generate a plan
• execute an action
• observe results
• revise the plan if necessary
This loop enables the system to pursue longer and more complex tasks.
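The cycle above can be sketched as a minimal loop in Python. This is an illustrative skeleton, not the paper's implementation: the `interpret`, `make_plan`, and `act` methods are hypothetical stand-ins for what a real system would delegate to a language model or external tools.

```python
from dataclasses import dataclass, field

@dataclass
class AgentLoop:
    """Minimal sketch of the interpret-plan-act-observe-revise cycle.
    All step functions here are placeholders, not a real agent."""
    goal: str
    plan: list = field(default_factory=list)
    done: bool = False

    def interpret(self):
        # A real system would parse the goal with an LLM; we just pass it through.
        return self.goal

    def make_plan(self, goal):
        # Placeholder decomposition of the goal into concrete steps.
        self.plan = [f"step 1 for {goal}", f"step 2 for {goal}"]

    def act(self, step):
        # Execute one step; a real agent might call a tool or run code here.
        return f"result of {step}"

    def run(self, max_iters=10):
        goal = self.interpret()
        self.make_plan(goal)
        observations = []
        for _ in range(max_iters):
            if not self.plan:
                self.done = True  # goal reached: nothing left to do
                break
            step = self.plan.pop(0)
            observations.append(self.act(step))
            # Revision hook: a real agent could re-plan based on the observation.
        return observations

agent = AgentLoop(goal="summarize a document")
print(agent.run())
```

The `max_iters` bound matters in practice: as the paper notes later under reliability, agents can get stuck in repetitive loops, so real frameworks cap the number of iterations.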
For example, if a user asks an AI system to build a small application, a traditional model might simply produce a block of code. An agentic system could instead break the task into steps such as requirement analysis, code generation, testing, debugging, and documentation.
This difference represents a shift from text generation to autonomous problem solving.
**Architecture of Agentic AI Systems**

The paper identifies several components that appear repeatedly in agent-based architectures.
**Perception**
Agents first gather information from their environment. This may include user input, documents, APIs, databases, or external tools.
**Planning**
Planning is responsible for decomposing a large objective into smaller tasks. Some systems rely on language models themselves for planning, while others combine them with specialized planning algorithms.
**Execution**
The execution stage performs the planned actions. These may involve tool usage, database queries, code generation, or interaction with other agents.
**Memory**
Agentic systems often include both short-term and long-term memory. This allows them to store previous steps and use that information when making future decisions.
**Reflection and Evaluation**
Some frameworks introduce reflection mechanisms where the system reviews its own output and attempts to improve the result.
Together, these components create a continuous reasoning loop, allowing agents to adapt their strategy during task execution.
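To make the composition concrete, here is one way the five components might fit together in code. This is a hypothetical sketch under my own naming; the paper surveys architectures rather than prescribing a single class design.

```python
class Memory:
    """Both memory kinds the paper mentions, in toy form."""
    def __init__(self):
        self.short_term = []   # steps and results within the current task
        self.long_term = {}    # facts that could persist across tasks

    def remember(self, step, result):
        self.short_term.append((step, result))


class Agent:
    """Illustrative composition of perception, planning, execution,
    memory, and reflection into one reasoning loop."""
    def __init__(self):
        self.memory = Memory()

    def perceive(self, environment):
        # Gather information: here just user input; real agents also read
        # documents, APIs, databases, or tool outputs.
        return environment.get("user_input", "")

    def plan(self, observation):
        # Decompose the objective into smaller tasks (placeholder logic).
        return [f"analyze: {observation}", f"respond: {observation}"]

    def execute(self, step):
        # Perform the planned action (placeholder result).
        return f"done({step})"

    def reflect(self, result):
        # Review the output; a real system might revise and retry.
        return "ok" if result.startswith("done") else "retry"

    def run(self, environment):
        observation = self.perceive(environment)
        for step in self.plan(observation):
            result = self.execute(step)
            self.memory.remember(step, result)
            if self.reflect(result) == "retry":
                result = self.execute(step)  # one simple retry
        return self.memory.short_term
```

The key design point is that memory sits between execution and reflection, so each decision can condition on what the agent has already done.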
**Connection to AI Course Concepts**

One of the most interesting aspects of the paper is how closely its ideas align with foundational concepts taught in artificial intelligence courses.
**Intelligent Agent Model**
In classical AI theory, an agent is defined as a system that perceives its environment and acts upon it in order to achieve goals.
Agentic AI systems essentially implement this model using modern machine learning techniques. The perception–action cycle described in textbooks is reflected directly in the architecture of these systems.
However, large language models introduce an additional capability: natural language reasoning, which allows agents to interpret goals expressed in human language.
**Relationship to Search Algorithms**

The planning behavior described in the paper is closely related to search problems studied in AI.
Algorithms such as A* search attempt to find an optimal path from an initial state to a goal state by exploring possible actions. Agentic systems perform a similar process when they generate task plans.
For instance, consider a programming task assigned to an agent:
• Initial state: no code written
• Goal state: working program
Possible actions may include generating functions, testing outputs, identifying errors, and rewriting code. The agent explores different action sequences until the goal is achieved.
Although modern agents rely heavily on language models rather than explicit heuristic functions, the underlying idea of state exploration toward a goal remains very similar to classical search strategies.
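The analogy can be made precise with a toy A*-style search over the programming task. The state is the set of completed sub-tasks, each action has preconditions, and the heuristic counts sub-tasks still missing; the action names and preconditions here are my own illustrative choices, not from the paper.

```python
from heapq import heappush, heappop
from itertools import count

# Hypothetical sub-tasks and their preconditions for the programming example.
ACTIONS = {
    "write_code": set(),
    "test": {"write_code"},
    "fix_errors": {"test"},
    "document": {"write_code"},
}
GOAL = frozenset(ACTIONS)

def heuristic(state):
    # Admissible here: each missing sub-task needs at least one action.
    return len(GOAL - state)

def plan():
    """A*-style search from the empty state to the goal state."""
    start = frozenset()
    tie = count()  # tie-breaker so the heap never compares states directly
    frontier = [(heuristic(start), 0, next(tie), start, [])]
    seen = {start}
    while frontier:
        _, cost, _, state, path = heappop(frontier)
        if state == GOAL:
            return path
        for action, preconds in ACTIONS.items():
            # An action is applicable only if its preconditions are satisfied.
            if action not in state and preconds <= state:
                nxt = frozenset(state | {action})
                if nxt not in seen:
                    seen.add(nxt)
                    f = cost + 1 + heuristic(nxt)
                    heappush(frontier, (f, cost + 1, next(tie), nxt, path + [action]))
    return None

print(plan())
```

An LLM-based agent replaces the explicit heuristic with learned judgment about which step to take next, but the structure, states, actions, and goal test, is the same.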
**Frameworks Supporting Agentic AI**

The paper also reviews several frameworks that help developers build agent-based systems. These frameworks provide tools for task planning, memory management, and integration with external services.
Examples include:
• LangChain
• AutoGPT
• MetaGPT
• OpenAgents
These frameworks attempt to transform language models into task-oriented agents capable of interacting with software tools, databases, and other AI systems.
The rapid development of such frameworks indicates that agentic AI is becoming an important direction in applied AI development.
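A pattern these frameworks share is tool calling: the model emits a structured action, and a runtime dispatches it to real software. The sketch below mocks the model call and uses invented names; it mirrors the general pattern rather than any specific framework's API.

```python
import json

# A registry mapping tool names to callables (illustrative tools only).
TOOLS = {
    "search": lambda query: f"results for {query!r}",
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def mock_model(prompt):
    # Stand-in for an LLM that responds with a JSON-encoded tool call.
    return json.dumps({"tool": "calculator", "args": ["2 + 3"]})

def run_agent(prompt):
    """Parse the model's proposed action and dispatch it to the right tool."""
    decision = json.loads(mock_model(prompt))
    tool = TOOLS[decision["tool"]]
    return tool(*decision["args"])

print(run_agent("What is 2 + 3?"))
```

Real frameworks add what this sketch omits: validation of the model's output, retries on malformed calls, and feeding the tool result back into the next model turn.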
**Key Challenges Highlighted in the Paper**

Despite promising progress, the authors emphasize that agentic AI systems still face significant limitations.
**Reliability**
Agents sometimes generate incorrect plans or become stuck in repetitive loops. Ensuring stable behavior remains a major challenge.
**Long-term reasoning**
Complex tasks may require dozens of steps, but current systems often struggle with maintaining consistent reasoning over long sequences.
**Multi-agent coordination**
When several agents collaborate, communication and synchronization become difficult.
**Safety and control**
Because agentic systems can act autonomously and access external tools, developers must carefully manage risks related to misuse or unintended actions.
These challenges show that agentic AI remains an active research area rather than a solved problem.
**Insights from Manual Reading vs. NotebookLM**

Reading the paper manually helped me understand the structure of the research and the authors’ arguments. The detailed discussion of architectures and evaluation methods required careful reading to fully grasp how different frameworks approach the design of agents.
Using NotebookLM provided a different kind of assistance. By uploading the paper into NotebookLM, I was able to ask targeted questions about specific sections and quickly clarify unfamiliar terms or frameworks. This was particularly useful when comparing different agent architectures mentioned in the review.
One important observation from using both approaches is that manual reading is essential for understanding context, while AI-assisted tools help accelerate exploration of complex material. NotebookLM was especially helpful for identifying connections between different parts of the paper that might otherwise require multiple readings.
**Why Agentic AI Matters for the Future of AI**

The transition from reactive models to autonomous agents may significantly change how artificial intelligence is used in practice.
Instead of interacting with AI as a question-answering system, users may increasingly rely on agents that can perform entire tasks independently, such as:
• writing and debugging software
• conducting research
• managing digital workflows
• analyzing large data sets
However, as the paper emphasizes, improvements in planning reliability, safety mechanisms, and evaluation metrics will be necessary before such systems can be deployed widely in critical environments.
**Conclusion**

The paper “The Rise of Agentic AI” provides an important overview of a rapidly developing area in artificial intelligence. By reviewing existing definitions, architectures, and frameworks, it clarifies how modern AI systems are evolving toward goal-oriented autonomous agents.
For students of artificial intelligence, the paper highlights how foundational concepts such as agent models and search algorithms remain highly relevant even as new technologies emerge. The combination of traditional AI principles with modern language models is shaping the next generation of intelligent systems.