
Arif Ali

From Classroom to Cutting-Edge Research: Agentic AI & LLM Search Agents Explained

By Arif Ali, BS-CS Student, FAST University | Published: March 2026 | 5 min read

Introduction

When I first started studying Artificial Intelligence at FAST University under Dr. Bilal Jan, concepts like search algorithms, agent types, and constraint satisfaction problems felt abstract. Then I read two research papers that completely changed how I see these topics. Suddenly, everything we study in class is not just theory — it is powering real, cutting-edge AI systems being built right now in 2025 and 2026.

In this blog, I will break down two papers I analyzed, explain their core ideas in simple language, and show you exactly how they connect to what we learn in our AI course.

Paper 1: "The Rise of Agentic AI" (2025)

What is the paper about?

This paper reviews the emerging field of Agentic AI — a major shift in how AI systems work. Traditional AI answers one question at a time. Agentic AI is different. It can plan, take actions, use tools, and complete multi-step goals — all on its own.

Think about it this way. A basic AI is like a calculator. You give it input, it gives you output. An Agentic AI is more like an employee. You give it a goal, and it figures out the steps, uses tools, makes decisions, and gets it done.

Key ideas from the paper:

The paper defines three core properties of Agentic AI systems:

  1. Autonomy — The agent acts without constant human guidance. It decides what to do next based on its current state and goal.

  2. Tool Use — Modern agents do not just generate text. They can search the web, write and run code, read files, send emails, and interact with APIs.

  3. Multi-Agent Collaboration — Multiple AI agents can work together, each specializing in a different task, coordinating like a team.

How does this connect to our AI course?

This is where it gets exciting. In class, we study agent types:

- Simple Reflex Agent
- Model-Based Reflex Agent
- Goal-Based Agent
- Utility-Based Agent
- Learning Agent

The paper maps directly to this! Agentic AI systems are essentially Goal-Based and Utility-Based agents scaled up with language models. The "planning" that Agentic AI does is exactly what a Goal-Based agent does — it looks at the current state, defines what actions are possible, and selects the path toward the goal.
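That planning loop can be sketched in a few lines of Python. Everything here is illustrative: the number-line toy problem, the `possible_actions` / `result` / `heuristic` callables, and the step limit are my own stand-ins from our course material, not code from the paper.

```python
def goal_based_agent(state, goal, possible_actions, result, heuristic, max_steps=100):
    """Goal-based agent loop: look at the current state, list possible
    actions, and pick the one whose outcome is closest to the goal."""
    plan = []
    for _ in range(max_steps):
        if state == goal:
            return plan                  # goal reached; return the action plan
        actions = possible_actions(state)
        if not actions:
            return None                  # dead end
        best = min(actions, key=lambda a: heuristic(result(state, a), goal))
        plan.append(best)
        state = result(state, best)
    return None                          # gave up after max_steps

# Toy usage: walk along a number line from 0 to 3.
plan = goal_based_agent(
    state=0, goal=3,
    possible_actions=lambda s: ["+1", "-1"],
    result=lambda s, a: s + 1 if a == "+1" else s - 1,
    heuristic=lambda s, g: abs(g - s),
)
print(plan)  # ['+1', '+1', '+1']
```

The same skeleton scales up: in an Agentic AI system, "actions" become tool calls and the heuristic becomes an LLM's judgment of which step moves it toward the goal.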

The multi-agent collaboration the paper describes is exactly the Multi-Agent environment we classify in our environment dimensions table. When I was doing Part A of Assignment 1 — classifying the GB flood rescue robot environment — I realized the robot operates in a Multi-Agent environment because other robots share its path. That is Agentic AI in a disaster zone.

What I found most interesting

What struck me most was the section on challenges. The paper is honest that Agentic AI systems still struggle with:

- Hallucination during multi-step planning
- Getting stuck in loops
- Not knowing when to stop and ask a human

These are basically the same problems we discuss in class when comparing complete vs incomplete search algorithms. An agent that loops forever is like DFS going down an infinite path.
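The classic textbook fix for that failure mode is depth-limited search, which is worth sketching. The toy state space below (every integer `n` branches to `n + 1` and `n * 2`, so the tree is infinite) is my own example, not one from the paper.

```python
def depth_limited_search(node, goal, successors, limit):
    """DFS with a depth cutoff, so an infinite branch cannot trap the agent."""
    if node == goal:
        return [node]
    if limit == 0:
        return None                          # cutoff: stop instead of looping forever
    for child in successors(node):
        path = depth_limited_search(child, goal, successors, limit - 1)
        if path is not None:
            return [node] + path
    return None

# Infinite state space: plain DFS would never return, but the cutoff saves us.
succ = lambda n: [n + 1, n * 2]
print(depth_limited_search(1, 5, succ, limit=4))  # [1, 2, 3, 4, 5]
print(depth_limited_search(1, 5, succ, limit=1))  # None (goal beyond the cutoff)
```

Agent frameworks do something analogous when they cap the number of planning steps before forcing the agent to stop or ask a human.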

Paper 2: "A Survey of LLM-based Deep Search Agents" (2026)

What is the paper about?

This is one of the most relevant papers I have ever read as an AI student. It surveys how Large Language Models are now being used as intelligent search agents — not just to answer questions, but to search deeply and iteratively to find answers to complex questions.

A normal search engine gives you ten blue links. An LLM-based Deep Search Agent does something far more powerful. It:

  1. Breaks your complex question into smaller sub-questions

  2. Searches for each sub-question separately

  3. Reads and reasons over the results

  4. Generates follow-up searches based on what it learned

  5. Synthesizes everything into one final, comprehensive answer
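Under some loose assumptions, those five steps can be sketched as a single loop. The `llm_decompose`, `web_search`, and `llm_followups` functions below are hypothetical stand-ins for real LLM and search-API calls — they are not an API from the paper.

```python
def llm_decompose(question):
    # Stand-in: a real agent would ask the LLM to split the question.
    return [f"sub-question {i} of: {question}" for i in (1, 2)]

def web_search(query):
    # Stand-in for a search-API call.
    return [f"result for {query}"]

def llm_followups(question, findings):
    # Stand-in: a real agent would generate new queries from what it learned.
    return []  # no follow-ups in this toy run

def deep_search(question, max_rounds=3):
    findings = []
    queue = llm_decompose(question)              # 1. break into sub-questions
    for _ in range(max_rounds):
        if not queue:
            break
        next_queue = []
        for sub in queue:                        # 2. search each one separately
            results = web_search(sub)
            findings.extend(results)             # 3. read/collect the results
            next_queue += llm_followups(sub, results)  # 4. follow-up searches
        queue = next_queue
    return " | ".join(findings)                  # 5. synthesize a final answer

print(deep_search("Why do agents loop?"))
```

The interesting design choice is the `next_queue`: each round of searching can spawn the next round, which is exactly the iterative deepening behaviour discussed below.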

How does this connect to our AI course?

This is the most direct connection to our course content I have ever seen in a real paper.

Connection to Search Algorithms:

The way these LLM search agents work is closely analogous to Iterative Deepening Depth-First Search (IDDFS) — one of the algorithms we compare in class. Just like IDDFS starts at depth 1 and goes deeper with each iteration, the LLM search agent starts with a simple search, then goes deeper based on what it finds.
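For comparison, here is textbook IDDFS (the course algorithm, not the paper's agent): run depth-limited DFS at limit 0, then 1, then 2, until a solution appears. The toy successor function is my own example.

```python
def dls(node, goal, successors, limit):
    """Depth-limited DFS helper."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in successors(node):
        path = dls(child, goal, successors, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iddfs(start, goal, successors, max_depth=10):
    """Iterative deepening: retry DLS with a growing depth limit,
    so the shallowest solution is found first."""
    for limit in range(max_depth + 1):
        path = dls(start, goal, successors, limit)
        if path is not None:
            return path
    return None

print(iddfs(1, 4, lambda n: [n + 1, n * 2]))  # [1, 2, 4]
```

The LLM analogue: each "depth limit" is one round of searching, and each round only goes deeper if the previous one did not produce a confident answer.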

The paper also describes how the agent uses a heuristic to decide which sub-questions are most worth exploring — exactly like A* Search uses a heuristic h(n) to prioritize which nodes to expand.
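A minimal way to picture that prioritization is a min-heap frontier — the same data structure A* uses for its open list. The sub-questions and their scores below are invented for illustration; a real agent would get the scores from the LLM.

```python
import heapq

# Hypothetical scores: lower = more promising, like f(n) = g(n) + h(n) in A*.
subquestions = [
    (0.9, "What is search reflection?"),
    (0.2, "Which LLMs were surveyed?"),
    (0.5, "How is the search budget set?"),
]
heapq.heapify(subquestions)                  # min-heap, like A*'s frontier

while subquestions:
    score, q = heapq.heappop(subquestions)   # always expand the best candidate next
    print(f"{score:.1f}  {q}")
```

Running this pops the 0.2 question first, then 0.5, then 0.9 — the agent always spends its next query on the sub-question its heuristic rates most promising.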

Connection to Agent Types:

The search agent described in this paper is a perfect example of a Goal-Based Agent. It has:

- A clear goal (answer the user's complex question)
- Knowledge of current state (what has been found so far)
- Actions (search, read, reason, synthesize)
- A plan (the order of sub-searches)

Connection to CSPs:

Interestingly, some of the search agents in the paper use constraint-like logic to decide when to stop searching. They have constraints like "search budget = 10 queries" and "confidence threshold = 80%." This is very similar to the battery and risk constraints in the CSP formulation we did in Assignment 1 Part C.
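A sketch of such a constraint-style stopping rule, using the example numbers above (10 queries, 80% confidence). The `run_query` callable is a hypothetical stand-in for one search-plus-read round; this is my own formulation, not code from the paper.

```python
SEARCH_BUDGET = 10        # constraint: at most 10 queries
CONF_THRESHOLD = 0.80     # constraint: stop once 80% confident

def search_until_constrained(run_query):
    """Keep querying until either constraint is violated."""
    confidence, queries = 0.0, 0
    while queries < SEARCH_BUDGET and confidence < CONF_THRESHOLD:
        confidence = run_query()   # one search round returns updated confidence
        queries += 1
    return queries, confidence

# Toy run: each query raises confidence by 0.25, so the threshold
# constraint fires after 4 queries, well inside the budget.
n, c = search_until_constrained(iter([0.25, 0.5, 0.75, 1.0, 1.0]).__next__)
print(n, c)  # 4 1.0
```

Just like a CSP, neither variable is optimized on its own: the loop simply runs until one of the two constraints becomes binding, the same way the battery and risk constraints bound the rescue robot in Assignment 1 Part C.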

What I found most interesting

The paper describes a concept called "search reflection" — where the agent evaluates its own search results and decides if it needs to search differently. This is like Simulated Annealing in our course. Just like SA accepts a worse solution temporarily to escape a local optimum, the search agent sometimes abandons a promising search path and tries a completely different approach to find a better answer.
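For reference, here is the textbook Simulated Annealing acceptance rule from our course (not code from either paper): always accept an improvement, and accept a worsening move with probability exp(-Δ/T), so early on (high T) the agent can escape a local optimum.

```python
import math
import random

def accept(delta, temperature):
    """SA acceptance rule: always take improvements (delta <= 0);
    take a worse move (delta > 0) with probability exp(-delta / T)."""
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / temperature)

# High temperature: a worse move is usually accepted (exploration).
# Low temperature: the same worse move is almost surely rejected.
random.seed(0)
print(accept(delta=1.0, temperature=10.0))  # True with this seed
print(accept(delta=1.0, temperature=0.01))  # False: exp(-100) is ~0
```

The analogy to search reflection: abandoning a promising search path for a completely different one is the "accept a worse move" step, and it is exactly what lets both systems escape local optima.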

My NotebookLM Experience

For this assignment, I used Google NotebookLM to help me understand both papers more deeply. Here is what I found:

When I read the papers manually first, I understood the surface-level ideas. But when I uploaded them to NotebookLM and started asking questions, I discovered connections I had completely missed.

For example, I asked NotebookLM: "How does the search reflection in Paper 2 relate to local optima problems?" — and the response made me realize that both papers are really about the same fundamental AI challenge: how do intelligent systems avoid getting stuck?

Agentic AI avoids getting stuck by having multiple agents with different specializations. LLM Search Agents avoid getting stuck by reflecting and re-searching. Simulated Annealing avoids getting stuck by accepting bad moves temporarily. They are all solving the same problem at different scales.

This was my biggest personal insight from reading and using NotebookLM together.

Summary: Course Connections at a Glance

| Paper Concept | Our Course Topic |
| --- | --- |
| Agentic AI planning | Goal-Based Agents |
| Multi-agent coordination | Multi-Agent Environments |
| Iterative sub-question search | Iterative Deepening Search |
| Heuristic-guided search priority | A* Search Algorithm |
| Search budget constraints | CSP Constraints |
| Search reflection & re-routing | Simulated Annealing |
| Partial observability handling | Model-Based Reflex Agent |

Conclusion

These two papers taught me that everything we study in our AI course is not just textbook theory. It is the foundation of systems being built and deployed right now. The next time you implement A* or model a CSP, remember that the same ideas are inside the most powerful AI agents in the world today.

If you are a CS student, I strongly recommend reading both papers. Start with "The Rise of Agentic AI" for the big picture, then read the LLM Search Agents survey to see how search algorithms come alive inside language models.

And use NotebookLM — it genuinely changes how you understand research papers.

Thanks for reading! This blog was written as part of my AI course assignment at FAST University under Dr. Bilal Jan. Feel free to connect with me on Hashnode and Dev.to.

Tags: #AgenticAI #ArtificialIntelligence #SearchAlgorithms #LLM #CSP #MachineLearning #FAST

📌 Video: Watch my 2-minute explanation of these papers here → https://youtu.be/vRHSxWp8r3U 📌 NotebookLM: https://notebooklm.google.com/notebook/87cda8ec-4139-4453-b4f0-9d50748438e9
