DEV Community

M Rizwan Akbar


Research

Assalam-o-Alaikum everyone. My name is Muaz and I am studying BS Computer Science at FAST National University, Faisalabad. This blog is part of my AI course assignment, given by Dr. Bilal Jan. We had to read research papers and write about them. Honestly speaking, when sir first told us this I was not very happy, because I thought research papers would be boring and difficult. But when I actually opened and read them I was quite surprised. So I am sharing what I learnt from this experience.
Why I Read Research Papers
Okay, so the first thing — when sir said to read research papers, my reaction was that this is going to be very difficult work. I always thought research papers were only for PhD students, not for people like us who are just in bachelors. But then I opened the papers, started reading, and noticed something very interesting: I kept seeing things from our AI class inside these brand-new papers. For example, the A* search algorithm that we study in our course was mentioned in a paper about ChatGPT-type systems. That really surprised me, and I thought, okay, maybe these papers are actually worth reading.
So my suggestion to every CS student is: please try to read at least one or two papers every semester. It really helps you understand where AI is going in the real world, not just what is written in textbooks from many years ago.
Paper 1 — The Rise of Agentic AI (2025)
Title of paper: "The Rise of Agentic AI: A Review of Definitions, Frameworks, Architectures, Applications and Challenges"
Where published: the Future Internet journal by MDPI, in September 2025. The authors reviewed 143 different research studies to write this one paper. That is a lot of work.
What This Paper is Actually About
Okay, so this paper basically tries to answer one simple question — what is agentic AI, and why is everyone suddenly talking about it? The term "agentic AI" barely existed before 2024, and then suddenly in 2025 everyone started using it everywhere. The paper says that more than 90 percent of all papers on this topic were published in just 2024 and 2025. That shows how fast this whole field is moving.
So what is agentic AI in simple words? A normal AI chatbot just answers one question at a time. You ask something, it replies, finished. But agentic AI is quite different. It can set its own goals, plan many steps ahead, use different tools, remember what it did before, and keep working until the whole task is completely done.
Think about the difference between asking someone "what is the weather today" versus saying "book me the cheapest ticket to Karachi next Friday, and also send an email to my boss that I will not come to the office." The second one needs planning, multiple steps, and different tools. That is exactly what agentic AI can do by itself, without any human help.
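To make this concrete for myself, I wrote a tiny toy sketch of that difference in Python. This is my own illustration, not code from the paper — the planner and the tool functions (`search_flights`, `send_email`) are made-up stand-ins for what a real agent would do with an LLM and real APIs.

```python
def run_agent(goal, plan, tools):
    """Run each planned step with its tool, keeping results in a shared
    memory so later steps can build on what earlier steps found."""
    memory = {}
    for tool_name, arg in plan(goal):
        memory[tool_name] = tools[tool_name](arg, memory)
    return memory

# Made-up stand-in tools for the Karachi ticket example above:
def search_flights(city, memory):
    # pretend we searched several sites and found the cheapest fare
    return {"to": city, "price": 12500}

def send_email(text, memory):
    # the email step can read what the earlier search step found
    fare = memory["search_flights"]["price"]
    return f"{text} (cheapest ticket found: Rs {fare})"

def plan(goal):
    # in a real agentic system the LLM would produce this plan itself
    return [("search_flights", "Karachi"),
            ("send_email", "I will not come to the office on Friday")]

result = run_agent("book cheapest ticket to Karachi and email boss", plan,
                   {"search_flights": search_flights, "send_email": send_email})
```

A chatbot would be just one function call returning one answer. The whole point here is the loop over planned steps plus the shared memory between them.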
Four Things Every Agentic System Must Have
The paper identifies four core things that every agentic AI system needs to actually work properly:

  1. Planning — breaking a big goal into small steps and deciding what to do next. In our Q1 rescue robot assignment, this is like the robot deciding which survivor to go to first, based on battery level and distance from its current position.
  2. Memory — remembering what you did before so you don't repeat the same mistake. Like the robot remembering which zones it already searched, so it doesn't go back to the same place and waste battery.
  3. Reflection — checking your own performance and adjusting if something is going wrong. Like the robot realising that its original path is now flooded and it needs to make a completely new plan in the middle of the mission.
  4. Goal Pursuit — keeping on working toward the objective without a human dictating every single step. Like the robot navigating the whole flood zone completely by itself to find all the survivors.

Frameworks the Paper Talks About
The paper reviews many real frameworks that developers actually use to build agentic systems today. Some names you might have heard before — LangChain, AutoGen by Microsoft, MetaGPT and CrewAI. All of them implement the four things above, but in their own different ways and styles.
The paper also discusses something called the ReAct framework. ReAct stands for Reason plus Act. It is basically an agent that first thinks about what to do, then actually does it, then thinks again based on what it observed after doing it, then acts again. This loop keeps going until the task is finished. When I read about ReAct I immediately thought — this is the same as the perception-action loop we learn in our AI class! The exact same concept, just a much more powerful implementation using modern AI models.
Most Interesting Finding — Compounding Errors
This was honestly the most interesting thing I found in the whole paper, and I thought about it a lot after reading. The paper talks about something called compounding errors, or error propagation. What this means is that in agentic systems, a small mistake in an early step does not stay small — it keeps growing bigger in every later step. For example, if the agent makes a wrong assumption in step 2, then by step 8 that wrong assumption has affected every single decision in between. The final output can be completely wrong even though each individual step looks okay by itself. The paper says this is one of the biggest unsolved problems in agentic AI right now, and honestly it is quite scary when you think about what it means for real applications.
I found this very relatable, because in our Q1 assignment, if the rescue robot chooses the wrong path at the beginning, it wastes battery on every single step that comes after that.
Then it might not have enough battery left to reach the most important survivors who need help. Same concept, just a different scale of application.

Paper 2 — A Survey of LLM-based Deep Search Agents (2025)
Title of paper: "A Survey of LLM-based Deep Search Agents: Paradigm, Optimization, Evaluation and Challenges"
Where published: arXiv, in August 2025. This is actually the first proper survey on this specific topic, according to the authors.
What is a Deep Search Agent
We all use Google every day, right? But Google search is actually quite a simple thing if you think about it properly. You type some keywords, it finds documents with those keywords, it ranks them and shows you links. That is basically it. It does not really understand what you actually want to find, it does not reason about anything at all — it just matches keywords and shows results.
Deep Search Agents are a completely different thing. These are AI systems that actually understand what you want to find. They plan a proper search strategy before they start. They search multiple times in multiple different places. They read what they find and reason about it carefully. And then they combine everything into one complete answer for you.
The best real example the paper gives is OpenAI's Deep Research feature. When you ask it a complex question, it spends several minutes searching many sources, reading them properly, connecting information from different places, and then writing a full structured report for you. That is a search agent working in real life right now.
Three Generations of Search — Here the Course Connection Comes!
Okay, this is the part I found most exciting in the whole paper, because the connection to our AI course is so clear here. I was genuinely surprised when I first saw it.
Generation 1 — Old search like Google: match keywords, rank documents, show links.
This is like the uninformed search we study in class — like BFS with no knowledge at all, just exploring everything blindly without any intelligent guidance.
Generation 2 — RAG systems: retrieve some documents, then give them to an AI to generate an answer from them. A little better than old search, but still no real planning about what to search for next.
Generation 3 — Agentic search like Deep Research: plan, search, reason, plan again, search again, combine everything and give a proper answer. This is exactly like the A* search we study in our AI class! It uses intelligence as a heuristic to guide where to search next — f(n) = g(n) + h(n). The language model itself IS the heuristic function in this case.
When I realised this connection I actually got quite excited. We study the A* algorithm in class and honestly it felt like just another boring textbook topic to memorise. But then I saw the same core idea — use a heuristic to intelligently guide search instead of blindly exploring everything — appearing in a paper about the most advanced AI search systems of 2025. That was a genuinely cool moment for me personally.
Most Surprising Finding — The Lost in the Middle Problem
I never expected to find something this surprising in a research paper, but here it is, and it really changed how I think about AI. There is a well-known problem called the lost in the middle problem. What it means is that when you give a language model a very long document to read, it pays much more attention to information at the beginning and end of the document. Information placed in the middle gets much less attention from the model.
So if you retrieve 20 documents and put them all together for the AI to read at once, documents 8 to 14 get much less attention than documents 1 to 3 and 17 to 20. This means how you arrange information matters as much as what information you retrieve in the first place.
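This made me wonder how people deal with it. One simple idea — my own sketch, not code from the paper, and real systems are surely more sophisticated — is to reorder the retrieved documents so the best-ranked ones land at the beginning and end of the context, pushing the weakest ones into the middle where attention is lowest:

```python
def reorder_for_attention(docs_best_first):
    """Given documents ranked best-first, alternate them between the front
    and the back of the context, so the strongest documents sit at the
    start and the end, and the weakest end up in the middle."""
    front, back = [], []
    for i, doc in enumerate(docs_best_first):
        # rank 1 -> front, rank 2 -> back, rank 3 -> front, ...
        (front if i % 2 == 0 else back).append(doc)
    return front + back[::-1]

# ranks 1..6: the best (1 and 2) go to the edges, the worst to the middle
print(reorder_for_attention([1, 2, 3, 4, 5, 6]))  # -> [1, 3, 5, 6, 4, 2]
```

Same documents, same ranking — only the position changes, which is exactly the point the paper is making.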
I never thought that something as simple as the position of text inside a document could affect AI performance so much. This was a genuinely surprising discovery for me.

How Both Papers Connect to Our Course
This was my favourite section to write, because the connections I found genuinely surprised me. These are not just surface-level connections — they are deep structural similarities between classical AI and modern research.
Agent Types Connection: Paper 1 is literally a review of how agent architectures have evolved over time. Every single framework they review is a different implementation of the agent types we study in class — simple reflex, model-based, goal-based, utility-based. All the same concepts, just made more powerful with modern technology.
A* Search Connection: The ReAct framework uses reasoning as a heuristic to decide the next action — the same f(n) = g(n) + h(n) structure as A* search. In Paper 2, the LLM itself acts as h(n) — an intelligent estimator of how useful each search direction will be. The whole process becomes informed search instead of blind search.
CSP Connection: The MetaGPT framework decomposes a complex task into sub-tasks for specialised agents — exactly like the CSP decomposition we study in the course. In Paper 2, query decomposition breaks complex questions into sub-questions. A direct application of the same concept from our textbook.
Multi-Agent Environment: Paper 1 has a whole section on multi-agent systems where agents communicate and coordinate with each other — this maps directly to the multi-agent dimension we classified in our Q1 rescue robot assignment.

My Experience with Google NotebookLM
Part of this assignment was to use Google NotebookLM. Honestly, I was a little skeptical at the start. I thought it would just summarise the papers and I would not really learn anything new from it. But I was completely wrong about this.
Manual reading: When I first read the papers without any help, it was quite difficult.
The technical terminology was hard to understand, especially for someone like me who does not read research papers regularly. The parts comparing the different frameworks in Paper 1 were especially confusing — I got confused about how LangChain, AutoGen and MetaGPT all relate to each other exactly. I had to re-read the same sections many times.
After using NotebookLM: The experience was quite different after this. I used the question-answer feature to ask about specific things I did not understand properly. For example, I asked it "what is the difference between ReAct and Chain of Thought" and it pulled the exact relevant sections from the paper to explain it in a simple way. The audio overview feature was especially good — it creates a podcast-style summary of the paper, which is very easy to listen to while doing other things.
Most importantly, through NotebookLM I discovered the lost in the middle problem in Paper 2, which I had completely missed during my manual reading. So NotebookLM actually helped me find something important that I had missed by myself. That was a good lesson for me.
My honest recommendation to everyone — read the paper yourself first to form your own understanding, then use NotebookLM to fill the gaps and verify your thinking. Using it without reading first is not as beneficial, because you do not have the base knowledge to ask good questions.

My Video
I also made a short 2 to 3 minute video where I explain the core ideas of both papers and share what I found most interesting about them. Link is below!

What I Learnt Overall
Before doing this assignment, I genuinely thought research papers were not for undergraduate students like us. That was completely wrong thinking. These papers are actually very readable if you give them proper time and use the right tools, like NotebookLM, to help with the difficult parts.
The most important thing I learnt from all this is that the classical AI we study in university — A* search, agent types, CSP — is not outdated at all. It is literally the foundation of the most advanced AI systems being built right now in 2025. Modern AI is just a more powerful version of the same concepts we already learn in class. That is honestly quite a motivating thing to realise as a student.
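To close with some actual code: here is the classic A* from our course in a few lines — my own toy example on a number line, nothing from either paper — just to remind myself what f(n) = g(n) + h(n) actually looks like when it guides a search:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Classic A*: always expand the node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, step_cost in neighbors(node):
            g2 = g + step_cost                   # cost so far to reach nxt
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Toy example: walk from 0 to 5 on a number line, every step costs 1,
# and the heuristic is the straight-line distance to the goal.
path = a_star(0, 5,
              neighbors=lambda n: [(n - 1, 1), (n + 1, 1)],
              h=lambda n: abs(5 - n))
print(path)  # -> [0, 1, 2, 3, 4, 5]
```

In the Deep Search Agents picture from Paper 2, the LLM plays the role of `h` — it estimates which direction is worth exploring next instead of expanding everything blindly.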
Thank you so much for reading this blog post. If you are also a CS student and found this helpful, please leave a comment below. Watch my video for a quick explanation of the same content!
