<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Muhammad Aqsam Qureshi</title>
    <description>The latest articles on DEV Community by Muhammad Aqsam Qureshi (@muaqqu).</description>
    <link>https://dev.to/muaqqu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3823709%2F785f9d42-24b9-4ae5-a527-f40389965048.png</url>
      <title>DEV Community: Muhammad Aqsam Qureshi</title>
      <link>https://dev.to/muaqqu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/muaqqu"/>
    <language>en</language>
    <item>
      <title>From Classroom to Research: How 2025 Papers Are Revolutionizing A* Search and Agentic AI</title>
      <dc:creator>Muhammad Aqsam Qureshi</dc:creator>
      <pubDate>Sat, 14 Mar 2026 09:57:35 +0000</pubDate>
      <link>https://dev.to/muaqqu/from-classroom-to-research-how-2025-papers-are-revolutionizing-a-search-and-agentic-ai-1ad2</link>
      <guid>https://dev.to/muaqqu/from-classroom-to-research-how-2025-papers-are-revolutionizing-a-search-and-agentic-ai-1ad2</guid>
      <description>&lt;p&gt;Introduction&lt;/p&gt;

&lt;p&gt;Hi! I'm M Aqsam Qureshi, a student at FAST University. This blog post is for my Artificial Intelligence course assignment, where I'm reviewing two research papers from 2025.&lt;/p&gt;

&lt;p&gt;Paper 1: A* Algorithm with Adaptive Weights&lt;/p&gt;

&lt;p&gt;What is the Goal of This Paper?&lt;/p&gt;

&lt;p&gt;The traditional A* algorithm that we studied in class has three main problems:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Too Many Search Nodes - Wastes time and battery power&lt;/li&gt;
&lt;li&gt;Excessive Turning Points - Robots cannot follow zig-zag paths&lt;/li&gt;
&lt;li&gt;Local Optima Traps - Gets stuck in suboptimal routes&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Goal of this paper: Solve these problems so robots can navigate efficiently.&lt;/p&gt;

&lt;p&gt;Four Improvements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Diagonal-Free Search - Near obstacles, avoid diagonal moves for safety&lt;/li&gt;
&lt;li&gt;Adaptive Weights - Weight changes based on how many obstacles are nearby&lt;/li&gt;
&lt;li&gt;Heuristic Reward Values - Helps escape local optima (like Simulated Annealing)&lt;/li&gt;
&lt;li&gt;B-Spline Smoothing - Makes paths smooth so robots can follow easily&lt;/li&gt;
&lt;/ol&gt;
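&lt;p&gt;To make improvement 2 concrete, here is how an adaptive heuristic weight could plug into A*. This is my own illustrative sketch, not the paper's code; the weight formula, the 1.0–2.0 weight range, and the density function are made-up assumptions:&lt;/p&gt;

```python
import heapq

def adaptive_weight(density, w_min=1.0, w_max=2.0):
    """Illustrative rule: dense obstacles -&gt; weight near w_min (cautious,
    closer to plain A*); open space -&gt; weight near w_max (greedier,
    fewer expanded nodes). Formula and range are my own assumptions."""
    return w_max - (w_max - w_min) * density

def astar(start, goal, neighbors, cost, heuristic, density_at):
    """A* where the heuristic weight adapts to local obstacle density."""
    open_heap = [(0.0, start)]
    g = {start: 0.0}
    came = {}
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if node == goal:
            path = [node]
            while node in came:
                node = came[node]
                path.append(node)
            return path[::-1]
        for nxt in neighbors(node):
            new_g = g[node] + cost(node, nxt)
            if nxt not in g or g[nxt] > new_g:
                g[nxt] = new_g
                came[nxt] = node
                w = adaptive_weight(density_at(nxt))
                heapq.heappush(open_heap, (new_g + w * heuristic(nxt), nxt))
    return None
```

&lt;p&gt;In open space the weight leans greedy and expands fewer nodes; near obstacles it falls back toward standard A*, matching the paper's cautious-near-obstacles idea.&lt;/p&gt;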

&lt;p&gt;Results:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Search Nodes: 76.4% LESS&lt;/li&gt;
&lt;li&gt;Turn Angle: 71.7% LESS&lt;/li&gt;
&lt;li&gt;Computation Time: 10% FASTER&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Paper 2: The Rise of Agentic AI&lt;/p&gt;

&lt;p&gt;What is the Goal of This Paper?&lt;/p&gt;

&lt;p&gt;Agentic AI = AI that can think and act independently.&lt;/p&gt;

&lt;p&gt;Traditional AI vs Agentic AI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Responds to commands vs Takes initiative&lt;/li&gt;
&lt;li&gt;Waits for input vs Perceives environment&lt;/li&gt;
&lt;li&gt;Handles single task vs Handles multiple goals&lt;/li&gt;
&lt;li&gt;No memory vs Learns from experience&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Five Patterns of Agentic AI:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Tool Use - AI can use calculators, APIs, search engines&lt;/li&gt;
&lt;li&gt;Reflection - AI learns from its mistakes&lt;/li&gt;
&lt;li&gt;ReAct - Reason and Act in cycles (think, then do)&lt;/li&gt;
&lt;li&gt;Planning - Break complex goals into smaller steps&lt;/li&gt;
&lt;li&gt;Multi-Agent - Multiple AI systems work together&lt;/li&gt;
&lt;/ol&gt;
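&lt;p&gt;The ReAct pattern is easiest to see as a loop. A minimal sketch in Python, where a scripted demo_reason function stands in for the LLM and the one-entry tool table is a made-up example:&lt;/p&gt;

```python
def react_loop(question, reason, tools, max_steps=5):
    """ReAct: alternate Thought -&gt; Action -&gt; Observation until the
    reasoner emits a final answer. `reason` is a stand-in for an LLM."""
    history = [("Question", question)]
    for _ in range(max_steps):
        step = reason(history)          # Thought: decide what to do next
        if step["type"] == "finish":
            return step["answer"]
        tool = tools[step["tool"]]      # Action: call the chosen tool
        obs = tool(step["input"])       # Observation: feed the result back
        history.append(("Observation", obs))
    return None

# Scripted demo reasoner: look something up once, then answer with it.
def demo_reason(history):
    last_kind, last_val = history[-1]
    if last_kind == "Observation":
        return {"type": "finish", "answer": last_val}
    return {"type": "act", "tool": "lookup", "input": "capital of France"}

tools = {"lookup": lambda q: {"capital of France": "Paris"}.get(q, "unknown")}
```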

&lt;p&gt;Seven Types of Agents:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Simple Reflex - Room thermostat&lt;/li&gt;
&lt;li&gt;Model-Based - Self-driving car&lt;/li&gt;
&lt;li&gt;Goal-Based - Rescue robot&lt;/li&gt;
&lt;li&gt;Utility-Based - Stock trading bot&lt;/li&gt;
&lt;li&gt;Learning Agent - Netflix recommendation system&lt;/li&gt;
&lt;li&gt;Hierarchical - Factory automation robot&lt;/li&gt;
&lt;li&gt;Multi-Agent - Swarm of drones&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Course Connection (VERY IMPORTANT)&lt;/p&gt;

&lt;p&gt;Connection to Part A: Rescue Robot Scenario&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Partially observable environment → Yes; agentic AI is built for such environments&lt;/li&gt;
&lt;li&gt;Dynamic environment → Yes&lt;/li&gt;
&lt;li&gt;Stochastic environment → Yes&lt;/li&gt;
&lt;li&gt;Multi-agent environment → Yes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why did we choose a utility-based agent? Paper 2 confirms that utility-based agents work best in complex environments like our flood scenario.&lt;/p&gt;

&lt;p&gt;Connection to Part B: Search Algorithms&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"A* searches too many nodes" → 76.4% reduction Yes&lt;/li&gt;
&lt;li&gt;"Paths have too many sharp turns" → 71.7% reduction Yes&lt;/li&gt;
&lt;li&gt;"Different terrains need different costs" → Adaptive weights based on obstacle density Yes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Connection to Part C: Simulated Annealing&lt;/p&gt;

&lt;p&gt;Remember this formula from Part C? P(accept) = e^(-ΔE / T)&lt;/p&gt;

&lt;p&gt;Paper 1 uses the SAME concept! When the algorithm gets stuck in a local optimum, it temporarily accepts "worse" paths to explore better options - exactly like Simulated Annealing!&lt;/p&gt;

&lt;p&gt;Connection to CSPs&lt;/p&gt;

&lt;p&gt;Paper 1's Grey Wolf Optimizer treats weight adjustment as an optimization problem - exactly like we formulated survivor prioritization in Part C.&lt;/p&gt;

&lt;p&gt;Personal Insight: Manual Reading vs NotebookLM&lt;/p&gt;

&lt;p&gt;Phase 1: Manual Reading (Confusion)&lt;/p&gt;

&lt;p&gt;When I first read these papers by myself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The mathematics of Grey Wolf Optimizer was difficult to understand&lt;/li&gt;
&lt;li&gt;Technical terms like "ReAct" and "Reflection" were confusing&lt;/li&gt;
&lt;li&gt;I felt overwhelmed by the complexity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Honest feeling: "Will I ever be able to understand research papers?"&lt;/p&gt;

&lt;p&gt;Phase 2: NotebookLM (Clarity)&lt;/p&gt;

&lt;p&gt;I uploaded both papers to Google NotebookLM (notebooklm.google.com) and asked:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Explain Grey Wolf Optimizer in simple terms with an example"&lt;/li&gt;
&lt;li&gt;"Create a table comparing all 7 agent types"&lt;/li&gt;
&lt;li&gt;"Summarize the five operational patterns in simple words"&lt;/li&gt;
&lt;li&gt;"Show me the results in a bullet list"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What NotebookLM gave me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simple explanations with everyday examples&lt;/li&gt;
&lt;li&gt;Clean tables and easy comparisons&lt;/li&gt;
&lt;li&gt;Clear summaries of complex sections&lt;/li&gt;
&lt;li&gt;Citations back to the original paper&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example: NotebookLM explained Grey Wolf Optimizer as:&lt;br&gt;
"Imagine a pack of wolves hunting. The alpha wolf leads, beta and delta help, and omega follows. They search (explore), surround (exploit), and attack (converge). GWO mimics this behavior."&lt;/p&gt;

&lt;p&gt;So easy to understand!&lt;/p&gt;

&lt;p&gt;Phase 3: Back to the Paper (Understanding)&lt;/p&gt;

&lt;p&gt;With NotebookLM's explanations, I read the papers again. Everything became clear!&lt;/p&gt;

&lt;p&gt;Key Realization:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NotebookLM helped me understand 30% of the paper&lt;/li&gt;
&lt;li&gt;I understood the remaining 70% by reading myself&lt;/li&gt;
&lt;li&gt;AI didn't replace learning - it accelerated learning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What I Found Most Interesting&lt;/p&gt;

&lt;p&gt;From Paper 1: Grey Wolf Optimizer is inspired by wolf hunting behavior! Nature + algorithms = beautiful combination.&lt;/p&gt;

&lt;p&gt;From Paper 2: Self-improving AI agents that get better without human intervention - science fiction is becoming reality!&lt;/p&gt;

&lt;p&gt;Both Papers: They take what we learned in class and show how real research improves it. This makes me feel confident about my choice of field.&lt;/p&gt;

&lt;p&gt;Video Walkthrough&lt;/p&gt;

&lt;p&gt;I made a video explaining these papers. Watch it here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/794eacYVp_U" rel="noopener noreferrer"&gt;https://youtu.be/794eacYVp_U&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Comparison Table: Traditional vs Improved A*&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Heuristic Weight: Fixed vs Adaptive (Grey Wolf Optimizer)&lt;/li&gt;
&lt;li&gt;Movement: 8-direction vs 5-direction near obstacles&lt;/li&gt;
&lt;li&gt;Local Optima: Gets stuck vs Reward values for escape&lt;/li&gt;
&lt;li&gt;Path Smoothness: Sharp turns vs B-spline curves&lt;/li&gt;
&lt;li&gt;Search Nodes: High vs 76.4% less&lt;/li&gt;
&lt;li&gt;Turn Angle: High vs 71.7% less&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why This Matters for AI Students&lt;/p&gt;

&lt;p&gt;We often think that what we learn in class is the final truth. These papers prove:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Algorithms constantly improve - A* was created in 1968, still improving in 2025!&lt;/li&gt;
&lt;li&gt;Research builds on fundamentals - What we study is the foundation&lt;/li&gt;
&lt;li&gt;Interdisciplinary thinking works - Wolf hunting behavior inspired an algorithm!&lt;/li&gt;
&lt;li&gt;Course material is essential - Without basics, you cannot understand research&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Conclusion&lt;/p&gt;

&lt;p&gt;Both papers showed me the depth of AI:&lt;/p&gt;

&lt;p&gt;Paper 1 showed that even classic algorithms like A* have room for a 76.4% improvement in search nodes when you think creatively.&lt;/p&gt;

&lt;p&gt;Paper 2 showed that agentic AI - which sounds like science fiction - is already here with clear patterns, types, and frameworks.&lt;/p&gt;

&lt;p&gt;Course connection proved that what we are learning isn't outdated theory - it's the foundation for cutting-edge research.&lt;/p&gt;

&lt;p&gt;About Me&lt;/p&gt;

&lt;p&gt;Name: M Aqsam Qureshi&lt;br&gt;
University: FAST University&lt;br&gt;
Course: Artificial Intelligence&lt;/p&gt;

&lt;p&gt;Follow me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hashnode: &lt;a href="https://hashnode.com/@aqsamqureshi" rel="noopener noreferrer"&gt;https://hashnode.com/@aqsamqureshi&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Dev.to: &lt;a href="https://dev.to/muaqqu"&gt;https://dev.to/muaqqu&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Tagging &lt;a class="mentioned-user" href="https://dev.to/raqeeb_26"&gt;@raqeeb_26&lt;/a&gt; (dev.to) and @raqeebr (Hashnode) as per assignment requirement.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>algorithms</category>
      <category>computerscience</category>
    </item>
  </channel>
</rss>
