When I designed a rescue robot for flood-affected survivors in Gilgit-Baltistan, I thought I was just completing a university AI assignment. But after reading two research papers, one on Agentic AI and one on deep search agents, I realized I had accidentally designed an advanced AI system without knowing it. This blog shares what I found.
Paper A Summary:
The first paper, The Rise of Agentic AI (Asadul Islam et al., 2025), addresses a real gap: before it, there was no clear, shared definition of agentic AI. The paper defines AAI as AI with four properties: Adaptivity, meaning it adjusts to change in real time; Proactiveness, meaning it acts before problems happen; Autonomy, meaning it acts independently without human prompts; and Decision Agency, meaning it makes complex trade-off decisions by itself. Unlike traditional AI, which follows fixed rules, or generative AI, which just responds to prompts, agentic AI pursues goals on its own. The paper also gives an AMO framework that links what enables AAI, how it works, and the outcomes it produces for the organization.
Personal Insights:
What struck me most in Paper A was the challenge of accountability. The paper notes that when AAI makes a decision independently, like approving a loan or recommending a medical treatment, and something goes wrong, nobody knows who is responsible: the developer, the manager, or the company? This connects directly to my rescue robot assignment. If my robot chooses to rescue survivor A and leaves survivor B, and survivor B dies because of that decision, then who is responsible? Me, or the organization deploying the robot? The question has no easy answer, and the paper confirms it is one of the biggest unsolved problems in agentic AI today. That single realization completely changed how I think about AI design.
Notebook Vs Manual Understanding:
Before NotebookLM:
When I read Paper A manually, I understood the four properties and the challenges clearly, but one thing confused me: I could not figure out how AAI actually differed from a very advanced goal-based agent from my course textbook. Both pursue goals and both make decisions, so what is the real difference?
After NotebookLM:
When I asked NotebookLM this question, it explained that the key difference is persistent memory and multi-objective control: a goal-based agent from my course optimizes one single goal at a time, while AAI balances multiple competing goals simultaneously and remembers context across time. This distinction completely changed my understanding. My manual reading gave me the WHAT; NotebookLM gave me the WHY. That difference is very important.
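To make that distinction concrete for myself, I sketched both agent styles in Python. This is only my own illustration of the idea, assuming hypothetical goal names, weights, and scoring functions; it is not code from either paper.

```python
# Sketch (my assumption, not from the paper): a classic goal-based agent
# optimizes one objective and keeps no state, while an agentic controller
# balances several weighted goals and carries memory across decisions.

def goal_based_choice(actions, score):
    # Single objective: just pick the action with the best score.
    return max(actions, key=score)

class AgenticController:
    def __init__(self, weights):
        self.weights = weights   # e.g. {"speed": 0.5, "safety": 0.3, "battery": 0.2}
        self.memory = []         # persistent context carried across decisions

    def choose(self, actions, score_fns):
        # Multi-objective control: weighted sum over competing goals.
        def utility(action):
            return sum(w * score_fns[goal](action)
                       for goal, w in self.weights.items())
        best = max(actions, key=utility)
        self.memory.append(best)  # remember the decision for later replanning
        return best
```

With illustrative scores, the single-goal agent and the multi-goal controller can disagree: a path that is fastest may lose once safety and battery are weighed in, which is exactly the "competing goals" behavior NotebookLM pointed me to.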
Paper B Summary:
The second paper, A Survey of LLM-based Deep Search Agents (Xi et al., 2025), surveys how AI agents search the internet the way a researcher would, not the way Google does. Instead of returning a list of links, a search agent understands your goal, then follows a loop: plan, act by searching, observe the results, reflect on whether it has enough information, then update the plan and search again. It repeats this loop until it has a complete answer. The paper identifies three search structures: Parallel, which issues many queries at once; Sequential, which searches one step at a time, with each step informing the next; and Hybrid, which combines both approaches.
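The loop above can be sketched in a few lines of Python. This is a rough sketch of the idea as I understood it, not code from the paper: `search_fn` and `enough` are placeholders standing in for a real search API and a real sufficiency check.

```python
# Rough sketch (my own, not from the survey) of the plan -> act ->
# observe -> reflect loop a deep search agent runs.

def deep_search(question, search_fn, enough, max_rounds=5):
    plan = [question]                 # Plan: the question seeds the first query
    evidence = []
    for _ in range(max_rounds):
        if not plan:
            break
        query = plan.pop(0)           # Act: issue the next query
        results = search_fn(query)
        evidence.extend(results)      # Observe: collect what came back
        if enough(evidence):          # Reflect: is the answer complete yet?
            break
        if results:                   # Update the plan with a follow-up query
            plan.append(f"{question}, given: {results[-1]}")
    return evidence
```

The key design point is that each round's results feed the next round's query, which is exactly the Sequential structure the paper describes; running several queries per round instead would give the Parallel or Hybrid variants.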
Course Connection:
Reading both papers together revealed something I had not expected. My Q1 rescue robot assignment was actually an Agentic AI Deep Search problem wearing a different mask.
Paper A connections to my robot:
Autonomy connects to my robot navigating the flood zone completely alone, without human commands. Adaptivity connects to my robot replanning when flood paths change. Proactiveness connects to my robot returning to base before its battery runs out, without being told. Decision Agency connects to my robot using the utility function in Part C to choose which survivor to rescue first when two survivors are found at the same time.
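That last connection, decision agency, is worth a small sketch. This is a hypothetical stand-in for a Part C-style utility function: the factors (urgency, distance) and the weights are my own illustration, not the actual values from the assignment.

```python
# Hypothetical utility function (illustrative factors and weights,
# not my actual Part C values): score each survivor, pick the best.

def rescue_utility(survivor, w_urgency=0.6, w_distance=0.4):
    # Higher medical urgency raises the score; longer travel distance
    # lowers it (distance roughly normalised by a 10 km range).
    return w_urgency * survivor["urgency"] - w_distance * survivor["distance_km"] / 10

def choose_first(survivors):
    # Decision agency: the robot ranks the trade-off and commits on its own.
    return max(survivors, key=rescue_utility)
```

With numbers like these, a nearby survivor with moderate urgency can outrank a distant one with high urgency, and that is precisely the trade-off nobody is clearly accountable for if it goes wrong.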
Paper B connections to my search algorithms:
The Sequential search structure from Paper B closely resembles my A* algorithm from Part B of Q1. A* evaluates each node using f = g + h and decides the next step based on current information; the search agent likewise evaluates its current evidence and decides the next search step based on what it has found. Both iterate step by step and stop when the goal is reached. The Hybrid tree structure from Paper B is similar to IDS, which also increases search depth systematically. Paper A defines WHAT my robot is: an Agentic AI. Paper B defines HOW it searches: through a sequential loop like A*. Together they describe my complete AI rescue system.
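To show what the f = g + h analogy means in code, here is a compact A* sketch. The grid, unit step costs, and heuristic are illustrative assumptions for this post, not my actual Part B implementation.

```python
import heapq

# Compact A* sketch (illustrative, not my actual Part B code).
# The frontier is ordered by f = g + h: g is the cost so far,
# h the heuristic estimate to the goal.

def a_star(start, goal, neighbors, h):
    frontier = [(h(start), 0, start, [start])]
    visited = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path               # goal reached: stop, like the agent's loop
        if node in visited:
            continue
        visited.add(node)
        for nxt, step_cost in neighbors(node):
            if nxt not in visited:
                ng = g + step_cost    # updated cost so far
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None
```

Like the sequential search agent, each expansion uses everything learned so far (the g values in the frontier) to decide the single next step, and the loop terminates only when the goal test passes.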
Conclusion:
These two papers taught me that Agentic AI is not a distant future technology; it is the framework behind the rescue robot I already designed for my AI course assignment. Paper A gave me the theory. Paper B gave me the technical implementation. If deployed responsibly with clear governance and accountability systems, Agentic AI systems like my rescue robot have real potential to save lives in disaster scenarios like the 2022 Pakistan floods.
"cc: @raqeeb_26