24P-0694 Yasir Ali

From Textbooks to Real Labs: How Agentic AI & Adaptive A* Are Reshaping Intelligent Systems

👋 Introduction
Hi! I'm Yasir Ali, a CS student at FAST-NUCES. This post is part of my AI course assignment, where I analyzed two cutting-edge research papers and connected them to core course concepts — Search Algorithms, Intelligent Agents, and CSPs.

The two papers I chose:
• Paper 1: "The Rise of Agentic AI: A Review of Definitions, Frameworks, and Challenges" (2025)
• Paper 2: "Research on the A* Algorithm Based on Adaptive Weights" (2025)

I used Google NotebookLM alongside manual reading — a workflow I now recommend to every AI student. Here's what I found.

📄 Paper 1: The Rise of Agentic AI (2025)
What Is It About?
This paper surveys over 80 definitions of 'AI agents' and proposes a unified framework for Agentic AI — systems that autonomously plan, act, and adapt to achieve long-horizon goals. The authors classify agents along four dimensions: Autonomy, Reactivity, Proactiveness, and Social Ability.

🔗 Course Connection
In our AI course, we classify agents from Simple Reflex → Model-Based → Goal-Based → Utility-Based. This paper validates that classification and extends it: modern LLM-based agents (like GPT-4 with tools) are utility-based learning agents — just with richer action spaces (web search, code execution, memory).
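To make the "utility-based agent with a richer action space" idea concrete, here's a minimal sketch of one decision cycle. All the names (`utility_agent`, `web_search`, `recall_memory`) and the toy utility function are my own illustration, not something from the paper:

```python
def utility_agent(model, actions, utility):
    """Pick and run the highest-utility action given current beliefs.

    `actions` maps action names to callables (web search, code
    execution, memory lookup) -- the 'richer action space' of an
    LLM-based agent. All names here are illustrative.
    """
    best = max(actions, key=lambda name: utility(model, name))
    return best, actions[best](model)

# Toy run: the agent's beliefs say the answer is not cached,
# so searching scores higher than recalling from memory.
model = {"answer_cached": False}
actions = {
    "recall_memory": lambda m: "cache miss",
    "web_search": lambda m: "fresh results",
}
utility = lambda m, a: 1.0 if (a == "web_search") != m["answer_cached"] else 0.0

choice, result = utility_agent(model, actions, utility)
print(choice, result)
```

The point of the sketch is only the shape of the loop: beliefs (the model) feed a utility function, and the action space is just a dictionary you can extend with new tools.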

The paper's discussion of multi-agent coordination directly mirrors what we study in environment classification: cooperative vs. competitive, single vs. multi-agent. The coordination challenges they describe — deadlocks, redundant actions — are the same problems we analyze in class.

🧠 Manual Reading vs. NotebookLM
Manual reading gave me the framework — the four dimensions, the taxonomy. But when I asked NotebookLM, "How does this relate to the BDI model?", it surfaced something I missed completely: modern LLM agents simulate Belief-Desire-Intention without explicitly programming it. Beliefs = context window. Desires = system prompt. Intentions = chain-of-thought reasoning. That insight alone was worth the exercise.

📄 Paper 2: Adaptive Weights in A* (2025)
What Is It About?
Standard A* uses f(n) = g(n) + h(n). This paper proposes Adaptive Weighted A* (AWA*): f(n) = g(n) + w(n) · h(n), where w(n) changes dynamically based on battery level, obstacle density, and distance to goal. High battery means a higher w and a faster, greedier search; low battery means a lower w and a more conservative, near-optimal search.
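A minimal sketch of the evaluation function. The paper's exact weight schedule isn't reproduced here; this linear clamp over battery level and obstacle density is an illustrative stand-in (the paper's formula also factors in distance to goal):

```python
def adaptive_weight(battery, obstacle_density, w_min=0.8, w_max=1.6):
    """Illustrative AWA*-style weight: more battery and fewer
    obstacles -> larger w -> greedier search. Both inputs are
    assumed normalized to [0, 1]; the clamp keeps w in bounds.
    This is a stand-in shape, not the paper's exact formula."""
    w = w_min + (w_max - w_min) * battery * (1.0 - obstacle_density)
    return max(w_min, min(w_max, w))

def f(g, h, battery, obstacle_density):
    """AWA* evaluation: f(n) = g(n) + w(n) * h(n)."""
    return g + adaptive_weight(battery, obstacle_density) * h
```

With a full battery on open ground the weight sits at its maximum (greedy), and as the battery drains or obstacles thicken it slides back toward w_min (conservative).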

🔗 Course Connection
In class we learned: A* is optimal when h is admissible, Greedy Best-First is fast but suboptimal, and UCS is optimal but slow. AWA* creates a dynamic dial between these three: when w = 1 it is classic A*; as w grows above 1 it approaches Greedy Best-First; as w falls toward 0 it approaches UCS. It's not just a performance trick — it's encoding rational decision-making under resource constraints.
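The dial can be demonstrated directly. Here's a sketch of weighted A* on a toy 4-connected grid (my own example, not the paper's benchmark), run with a fixed w: w = 0 reduces to UCS, w = 1 is classic A*, and larger w behaves greedier:

```python
import heapq

def weighted_astar(grid, start, goal, w):
    """A* with f = g + w*h on a 4-connected grid (0 = free, 1 = wall).

    w = 0 behaves like uniform-cost search, w = 1 is classic A*,
    and w > 1 leans toward Greedy Best-First.
    Returns (path cost, nodes expanded)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(w * h(start), 0, start)]  # (f, g, node)
    g = {start: 0}
    expanded = 0
    while frontier:
        _, cost, node = heapq.heappop(frontier)
        if cost > g.get(node, float("inf")):
            continue  # stale queue entry
        expanded += 1
        if node == goal:
            return cost, expanded
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = cost + 1
                if new_cost < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = new_cost
                    heapq.heappush(
                        frontier, (new_cost + w * h((nr, nc)), new_cost, (nr, nc))
                    )
    return None, expanded

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
for w in (0.0, 1.0, 2.0):
    cost, expanded = weighted_astar(grid, (0, 0), (3, 3), w)
    print(f"w={w}: cost={cost}, nodes expanded={expanded}")
```

On this tiny grid every w finds the same optimal path, but the w = 0 run expands more nodes than w = 1 — the heuristic's guidance, scaled by w, is exactly the dial the paper turns at runtime.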

Results: AWA* finds paths 23% faster than standard A* on dynamic grids while staying within 5% of optimal cost. For time-critical disaster rescue, this tradeoff is exactly what's needed.

🧠 Manual Reading vs. NotebookLM
The weight update math (exponential decay formulas) was hard to parse manually. NotebookLM reframed it perfectly: the weight function encodes urgency. High urgency (low battery) = trust actual cost more. Low urgency = trust heuristic more. That reframing made the whole paper click.
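That reframing can be sketched as a weight schedule. This is one plausible exponential-decay shape — the constants and the formula are my illustration, not the paper's exact equations:

```python
import math

def urgency_weight(battery, w_min=0.8, w_max=1.6, k=3.0):
    """Weight as decaying trust in the heuristic (illustrative).

    battery is assumed in [0, 1]; urgency = 1 - battery. High
    urgency (low battery) pushes w toward w_min, so f = g + w*h
    leans on actual cost g(n); low urgency keeps w near w_max
    and trusts the heuristic h(n) more."""
    urgency = 1.0 - battery
    return w_min + (w_max - w_min) * math.exp(-k * urgency)

print(urgency_weight(1.0), urgency_weight(0.1))
```

The decay constant k controls how quickly the planner turns cautious as the battery drains; the exponential just makes that turn smooth rather than a hard threshold.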

🎯 Synthesis: How These Papers Connect

| Concept | Agentic AI Paper | Adaptive A* Paper | Our Course |
| --- | --- | --- | --- |
| Decision under constraint | Trust-control tradeoff | Adaptive weight w(n) | Utility-based agents |
| Dynamic environments | Real-time replanning | Dynamic weight update | Dynamic environment classification |
| Search strategy | Goal-based planning | f(n) = g + w·h | A*, UCS, BFS comparison |

💡 Personal Reflection
Before this assignment, I thought A* was a solved problem. These papers showed me that foundational algorithms have active research frontiers. The adaptive weight idea is simple in hindsight but profound in impact.

More importantly: using NotebookLM alongside manual reading was a genuinely better workflow than either alone. Manual reading built intuition. NotebookLM surfaced cross-paper connections and answered 'why does this matter?' questions that papers rarely answer directly.

My advice to fellow AI students: Read the paper first. Then use tools to go deeper. Never the other way around.

📚 References
• "The Rise of Agentic AI: A Review of Definitions, Frameworks, and Challenges" — arXiv, 2025
• "Research on the A* Algorithm Based on Adaptive Weights" — 2025
• Russell & Norvig — Artificial Intelligence: A Modern Approach, 4th Edition
• Google NotebookLM — used for paper analysis

Written by Yasir Ali | Roll No: [24P-0694] | FAST-NUCES | AI Course
Tagging: @raqeebr (Hashnode) | @raqeeb_26 (Dev.to)

Top comments (1)

Raqeeb (@raqeeb_26):
Good start. But the "How These Papers Connect" part is not clear. And where is the video link?