"The Rise of Agentic AI: A Review of Definitions, Frameworks, and Challenges"
Artificial Intelligence has advanced rapidly over the last few years. Today, many AI systems can answer questions, generate text, and help users complete a variety of tasks. However, most traditional AI systems still work in a reactive way: they respond to prompts given by users or programmers but usually cannot plan larger, more complex tasks on their own.
The research paper “The Rise of Agentic AI: A Review of Definitions, Frameworks, Architectures, Applications, Evaluation Metrics, and Challenges” introduces the concept of Agentic AI, which aims to make AI systems more autonomous and capable of working toward specific goals, addressing this planning limitation.
The main goal of the paper is to provide an overview of current research on Agentic AI and explain how these systems work. To do this, the authors reviewed more than 143 research papers that discuss different definitions, architectures, tools, and applications of Agentic AI.
The paper mainly explains how Agentic AI differs from traditional AI and even from generative AI systems. Most AI systems simply respond to prompts, but Agentic AI systems can set sub-goals, plan their actions, use different tools, and perform multiple steps to complete a task with minimal human supervision.
To study the topic more clearly, the authors organized their research around several guiding questions, including:
How is Agentic AI defined, and what makes it different from other types of AI?
What frameworks and tools are used to build agentic systems?
What architectural components are needed for these systems?
What real-world tasks can Agentic AI perform?
How can these systems be evaluated?
What ethical or practical challenges might appear when using them?
By answering these questions, the paper tries to give researchers and developers a better understanding of how Agentic AI systems are designed and used.
The paper revolves around the idea that Agentic AI systems are built on several capabilities: planning, memory, reasoning, and tool use. These abilities allow the systems to work in a more intelligent and independent way.
For example, if a user asks an AI system to plan a trip, the system may first identify the main goal and then break it into smaller tasks. These tasks might include searching for flights within the travel dates, finding hotels or other accommodation, and recommending tourist sites to visit. The system then completes these tasks in order until the overall goal is achieved.
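The decomposition idea can be sketched in plain Python. This is only an illustration of breaking a goal into ordered sub-tasks; the task names and the placeholder handler are invented here, not taken from the paper.

```python
# Minimal sketch of goal decomposition: a main goal is split into
# sub-tasks that are executed in order. All task names and handlers
# are invented for illustration.

def run_subtask(name, args):
    # Placeholder: a real agent would call a search tool or an LLM here.
    return f"completed {name} with {args}"

def plan_trip(destination, dates):
    # The agent breaks the main goal into an ordered list of sub-tasks.
    subtasks = [
        ("search_flights", {"to": destination, "dates": dates}),
        ("find_hotels", {"city": destination, "dates": dates}),
        ("recommend_sites", {"city": destination}),
    ]
    results = {}
    for name, args in subtasks:
        # Each sub-task is completed before the next one starts.
        results[name] = run_subtask(name, args)
    return results

itinerary = plan_trip("Lisbon", "2025-06-01 to 2025-06-07")
for task, outcome in itinerary.items():
    print(task, "->", outcome)
```

A real agentic system would replace `run_subtask` with tool calls and could reorder or add sub-tasks as new information arrives.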
The paper also emphasizes the idea of multi-agent systems. Instead of a single AI system handling everything, multiple agents can work together to complete a task. Each agent has a specific role, and together they can solve more complicated problems.
Another vital point in the paper is the use of frameworks that help developers build agentic systems. Some of the frameworks mentioned include LangChain, AutoGPT, BabyAGI, and MetaGPT. These frameworks allow AI systems to plan their tasks, store information in memory, and interact with external tools or services.
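The plan / memory / tools pattern these frameworks provide can be sketched without any framework at all. Nothing below uses a real LangChain or AutoGPT API; the `Tool` class and the toy tools are my own stand-ins for illustration.

```python
# Sketch of the plan / memory / tools pattern, with no real framework API.

class Tool:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

class SimpleAgent:
    def __init__(self, tools):
        self.tools = {t.name: t for t in tools}
        self.memory = []  # stores (tool, result) pairs across steps

    def run(self, plan):
        # 'plan' is a list of (tool_name, argument) steps decided in advance.
        for tool_name, arg in plan:
            result = self.tools[tool_name].fn(arg)
            self.memory.append((tool_name, result))  # remember the result
        return self.memory

# Two toy tools: a calculator and a text transformer.
calculator = Tool("calc", lambda expr: eval(expr, {"__builtins__": {}}))
echo = Tool("echo", lambda text: text.upper())

agent = SimpleAgent([calculator, echo])
history = agent.run([("calc", "2 + 3"), ("echo", "done")])
print(history)  # [('calc', 5), ('echo', 'DONE')]
```

Real frameworks add the missing pieces: an LLM decides the plan dynamically, memory persists across sessions, and tools wrap real external services.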
The paper then discusses several possible applications of Agentic AI in fields such as business automation, scientific research, software development, and customer service. In these fields, agentic systems could help automate complex workflows that normally require significant human effort.
At the same time, the paper makes clear that none of this is simple. A major challenge is reliability: since agentic systems can act independently, there is always a risk that they make incorrect decisions. Because of this, developers must design these systems carefully and ensure that they operate safely.
Another concern is related to security and data privacy. If AI systems use multiple tools and data sources, there must be proper safeguards to protect sensitive information.
In our AI course, we learned that an agent is a system that perceives its environment and takes actions using its actuators in order to achieve its goals. Agentic AI builds on this idea by creating more advanced agents that can plan several steps ahead and perform more complex reasoning.
For example, in informed search algorithms like A*, an agent selects the best path to a goal by using heuristics (estimated costs) and evaluation functions. The paper does not modify the A* algorithm directly, but it builds on the idea of goal-based agents by allowing AI systems to break large problems into smaller steps and solve them one by one.
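For reference, the A* idea from the course can be written in a few lines: expand the node with the lowest f = g + h, where g is the cost so far and h the heuristic estimate. The toy graph below is my own example, not from the paper.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Classic A*: always expand the node with the lowest f = g + h.
    'neighbors(n)' yields (next_node, step_cost); 'h(n)' is the heuristic."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None, float("inf")

# A small weighted graph as a toy example.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
path, cost = a_star("A", "D",
                    neighbors=lambda n: graph[n],
                    h=lambda n: 0)  # h=0 reduces A* to uniform-cost search
print(path, cost)  # ['A', 'B', 'C', 'D'] 3
```

With an informative (admissible) heuristic instead of `h=0`, A* explores far fewer nodes while still finding the optimal path.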
Another concept from our course that relates to this paper is multi-agent systems. In many AI problems, multiple agents can cooperate on tasks that are too complex for a single agent to handle. The paper shows that modern agentic frameworks already use this idea, with different agents working together to complete complex tasks.
Because of this, the research helps connect the theoretical concepts we learn in class with the practical technologies that are currently being developed.
After reading the paper, I realized that AI systems are evolving faster than we expect. I did not know much about AI before this, and I thought of AI systems as tools that answer our questions or generate content based on our prompts. However, this paper showed me that newer AI systems are being designed to plan tasks and work toward goals more independently.
While exploring the paper using NotebookLM, I also noticed that building agentic systems requires knowledge from different areas of computer science. It is not only about machine learning models. It also involves system design, planning algorithms, and coordination between multiple agents.
Another interesting idea for me was the importance of memory and feedback loops in agentic systems. These systems can evaluate their own performance and adjust their strategy if something does not work correctly. This makes them more flexible and capable of improving their results during the task.
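This evaluate-and-adjust behavior can be sketched as a simple feedback loop. Both `attempt` and `evaluate` below are invented stand-ins; a real agent would do actual work and score it with an LLM or a task-specific metric.

```python
# Sketch of an evaluate-and-retry feedback loop: the agent checks its own
# output and switches strategy until the result passes or tries run out.

def attempt(task, strategy):
    # Placeholder for the agent actually doing the work.
    return f"{task} solved with {strategy}"

def evaluate(result):
    # Placeholder quality check; a real agent would score the output
    # properly instead of this toy keyword test.
    return "careful" in result

def solve_with_feedback(task, strategies, max_tries=3):
    for strategy in strategies[:max_tries]:
        result = attempt(task, strategy)
        if evaluate(result):
            return result  # good enough, stop here
        # otherwise loop: pick the next strategy and try again
    return None  # gave up after max_tries attempts

print(solve_with_feedback("summarize report", ["fast draft", "careful pass"]))
```

The key point is the loop structure: result, self-evaluation, strategy change, retry — which is what lets agentic systems improve mid-task instead of failing once and stopping.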
However, the paper also made it clear that there are still many challenges in the way. As AI systems become more autonomous, issues such as safety, reliability, and accountability become very important. If an AI system makes a wrong decision, it may be difficult to determine who is responsible for the outcome: the system or the person operating it.
Moreover, both reading the paper manually and exploring it with NotebookLM helped me better understand where AI technologies are headed. It seems likely that AI systems will gradually move from simple tools toward more advanced assistants that can work with humans on complex tasks.
The paper offers a detailed overview of Agentic AI and explains why it is so important in artificial intelligence research. By reviewing this body of research and these frameworks, the authors show how AI systems are moving from being purely reactive to being goal-directed agents capable of planning, reasoning, and acting on their own.
This research directly connects with many of the concepts we studied in our AI course, especially intelligent agents and multi-agent systems. But alongside its potential advantages, it also raises concerns about safety, reliability, and ethical use.
In the future, Agentic AI may play a significant role in automating complex tasks and assisting human decision-making. However, more research is needed to ensure the safety, reliability, and trustworthiness of these systems.
LLM-Based Deep Search Agents: Transforming the Way AI Searches for Information
The landscape of computer science has dramatically transformed over the last few years, primarily due to advancements in Artificial Intelligence and Large Language Models. More recently, we have seen developments in the form of chatbots and AI assistants that can take questions from users, summarize lengthy pieces of text, or produce content in a variety of formats. Although the user experience indicates advanced computation by these systems, the underlying technologies are relatively rudimentary. Most of these systems rely on the data they have been trained on to generate an answer to a user's question or, less frequently, make a request for a small amount of information from an external source in response to a user's request.
The research paper “A Survey of LLM-based Deep Search Agents: Paradigm, Optimization, Evaluation, and Challenges” introduces a new concept called Deep Search Agents. These agents represent a new stage in AI systems. Instead of just responding to a prompt directly, they search for information step by step and combine results from different sources in order to produce a more detailed and useful answer.
The main purpose of the paper is to explain how deep search agents work, how they are designed, and what challenges researchers face while developing them. The authors reviewed a large number of previous studies and organized them according to their system architectures, optimization techniques, and application areas. This helps the reader understand the whole field in a clearer way.
According to the paper, the evolution of search systems can be divided into three stages. The first one is the traditional search method, where a user searches for information on the internet and then reads and combines the results manually. The second stage is LLM-based search, where language models help summarize the information that was retrieved or sometimes rewrite the query to make it better. However, these systems still rely mostly on traditional searching methods and cannot deal with very complex information problems.
Deep search agents try to solve this issue by adding decision-making ability to the searching process. These agents can understand what the user is asking, create a plan for finding the information, and then collect the necessary data before generating an answer. Instead of performing only one search, they may go through several search steps and keep updating their strategy as they find new information.
The paper explains that a typical deep search agent works roughly as follows. First, it analyzes the user’s question and tries to understand the intent behind it. Next, it creates a plan for how to search. Then it performs actions such as searching documents, browsing web pages, or retrieving data from internal memory. Once the information is gathered, the agent evaluates it and finally generates a response that answers the user’s question as well as possible.
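The steps above can be sketched as a loop. This is only a toy illustration of the analyze → plan → search → evaluate → synthesize cycle; the tiny in-memory "index", the coverage check, and the query-refinement rule are all my own inventions, not the paper's method.

```python
# Sketch of the deep-search loop: gather evidence over several search
# steps, refining the query, then synthesize an answer at the end.

def deep_search(question, search_fn, max_steps=3):
    evidence = []
    query = question  # start from the user's question (intent analysis)
    for _ in range(max_steps):
        results = search_fn(query)          # plan + perform one search step
        evidence.extend(results)
        if len(evidence) >= 2:              # toy evaluation of coverage
            break
        query = question + " details"       # refine the query and retry
    # synthesis step (here: just join the collected evidence)
    return " | ".join(evidence)

# A fake two-entry "search index" standing in for the open web.
fake_index = {
    "capital of France": ["Paris is the capital of France."],
    "capital of France details": ["Paris has been the capital since 987."],
}
answer = deep_search("capital of France",
                     search_fn=lambda q: fake_index.get(q, []))
print(answer)
```

The difference from a single-shot search is the loop: the agent judges whether it has enough evidence and, if not, issues a new, refined query before answering.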
Another interesting idea discussed in the paper is how these agents structure their search process. Some agents use a parallel search strategy where multiple queries are generated at the same time. Others use sequential search, which means they search for information first, analyze it, and then decide what to do next. In some systems, a hybrid approach is used which combines both of these strategies.
The concept of multi-agent architectures is also discussed in the research. In complicated tasks, a single agent might not be enough to perform all operations efficiently. So researchers sometimes divide the work among multiple agents. For example, one agent might act as the planner that decides the search strategy, another agent performs the search itself, and another agent analyzes the collected evidence and creates the final response.
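The planner / searcher / analyst split described above can be sketched with three plain functions. The role boundaries follow the paper's description, but everything else here (the query rule, the corpus, the answer format) is invented for illustration.

```python
# Sketch of a three-role multi-agent pipeline: a planner chooses queries,
# a searcher fetches evidence, and an analyst writes the final answer.

def planner(question):
    # Decide which queries to run (a real planner might use an LLM).
    return [question, question + " background"]

def searcher(query, corpus):
    # Fetch evidence for one query from a toy in-memory corpus.
    return corpus.get(query, [])

def analyst(question, evidence):
    # Turn the collected evidence into a final response.
    return f"Answer to '{question}' based on {len(evidence)} sources."

def multi_agent_answer(question, corpus):
    queries = planner(question)
    evidence = [doc for q in queries for doc in searcher(q, corpus)]
    return analyst(question, evidence)

corpus = {"solar power": ["doc1"], "solar power background": ["doc2"]}
print(multi_agent_answer("solar power", corpus))
# Answer to 'solar power' based on 2 sources.
```

In a real system each role would be a separate agent with its own model and tools, and they would exchange messages rather than plain function calls, but the division of labor is the same.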
The paper also highlights several application areas where deep search agents can be useful. Many modern AI research assistants already use similar ideas. These systems can search large datasets from different sources and generate reports based on them. Besides research, deep search agents could also be useful in fields such as finance, medicine, software development, and scientific discovery.
Another capability of these agents is that they can search their own internal knowledge. This includes retrieving information from stored memory, previous interactions, and tools that the system has access to. Because of this, the system can sometimes improve its reasoning when solving complicated problems.
Even though deep search agents sound very promising, the paper explains that there are still many challenges. The reliability of the overall system depends heavily on the reliability of the individual agents and the quality of their data. Because these agents collect information from multiple sources, many of which are unreliable or inaccurate, validation mechanisms are needed to check that the data they rely on is trustworthy.
Another challenge is dealing with multiple types of data. Most current search agents mainly work with text information. But in real life, knowledge also exists in images, videos, graphs, and other formats. Building systems that can process all these different data types together is still an ongoing research problem.
Training deep search agents is also difficult. Reinforcement learning is often used to optimize these agents, but designing a good reward function is not easy. In many real-world problems there is not always one correct answer. Because of this, it becomes hard to measure how good the agent’s response actually is.
During the Artificial Intelligence course, we learned that an intelligent agent is a system that observes its environment and takes actions in order to achieve a goal. Deep search agents follow a similar concept. They act as goal-oriented systems that analyze their environment, plan actions, and gather information until they can provide an answer to the user.
For example, informed search algorithms like A* choose the best path to a goal by using heuristic evaluation. Deep search agents do not directly use the A* algorithm, but they use a somewhat similar idea. They explore different search paths and try to find the most useful information that can help solve the problem.
Another concept from the AI course that connects with this research is multi-agent systems. In these systems, several intelligent agents cooperate with each other to solve a problem. The survey paper shows that many modern search systems are starting to use this idea by dividing tasks between multiple agents.
While reading this paper, I realized that the way AI systems search for information is developing very quickly. Before reading it, I mostly thought of AI systems as tools that simply answer questions using knowledge they already have. But deep search agents show that AI can also actively search for information and combine different sources to build better answers.
I also observed that creating such intelligent systems necessitates expertise in a variety of computer science fields. The development of these agents involves a number of disciplines, including natural language processing, information retrieval, machine learning, system architecture, and reasoning. For the system to function correctly, all of these elements must be integrated.
But the study also shows that much remains to be done. Issues like information reliability, evaluating responses, and handling different data types still require further research and development.
The paper concludes by giving a thorough overview of LLM-based deep search agents and outlining the reasons they are emerging as a significant area of artificial intelligence research. Deep search agents are a significant step toward more intelligent and autonomous AI systems because they enable AI systems to create search strategies, collect data from various sources, and reason about complex questions.
Even though this technology is still in its infancy, it has the potential to transform how AI aids in knowledge discovery, research, and decision-making. Deep search agents could develop into effective instruments in the future for resolving a variety of challenging real-world issues.
Video link: https://youtu.be/5XArlf_gmWE
Special thanks to @raqeeb_26.