
Muhammad Hashir
From Search Algorithms to Intelligent Agents: What I Learned from Two AI Research Papers

Hello! My name is Muhammad Hashir, and I am a Computer Science student at FAST-NUCES. In this article I briefly explain what I learned from two recent research papers on Artificial Intelligence. The purpose of this post is to understand how the concepts we learn in AI courses are actually used in real research and modern intelligent systems.

The two papers I read are:

• “A Review of Definitions, Frameworks, and Challenges: The Rise of Agentic AI”

• “Research on the A* Algorithm with Adaptive Weights”

Both deal with topics that come up constantly in AI courses — search algorithms and intelligent agents — but they approach them from a practical, research-oriented angle that textbooks rarely do.

The Rise of Agentic AI:
The first paper tackles something I’d heard thrown around a lot lately: agentic AI. The basic idea isn’t complicated — instead of an AI that just responds to a single prompt and stops, an agentic system keeps going. It plans, makes decisions, takes actions, and adjusts based on what it finds along the way. It’s less like a calculator and more like a junior colleague you’ve handed a project to.

The paper breaks down what these systems actually look like under the hood. Most of them share four core components:

• **Perception** — gathering and making sense of information from the environment

• **Planning** — figuring out what steps need to happen and in what order

• **Memory** — storing useful information to draw on later in the task

• **Execution** — actually doing the thing, whether that means writing code, fetching data, or sending a message

In practice, most of these systems are built by wrapping a large language model with planning logic and external tools. So, if you ask an agent to “research this topic and write me a report,” it doesn’t just generate text — it searches the web, filters what’s relevant, organizes its findings, and produces a structured output. That’s a genuinely different kind of behaviour from what most people imagine when they think of AI.

What the paper doesn’t shy away from is the fact that this is still hard to get right. Keeping the system safe and predictable, making sure it doesn’t go off-script in weird ways, handling long tasks without losing context — these are real, unsolved problems. Reading this section made me appreciate why AI safety research is such a big deal right now. It’s not just philosophical hand-wringing; it’s very practical.
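The four-component loop can be sketched in a few lines of Python. Everything here is a hypothetical illustration of the perceive–plan–remember–execute cycle: `call_llm`, the step names, and the plan format are my own placeholders, not an API from the paper.

```python
def call_llm(prompt):
    """Placeholder for a large language model call (illustrative only)."""
    return f"response to: {prompt}"

def run_agent(goal, max_steps=3):
    memory = []                                  # Memory: findings so far
    plan = [f"step {i + 1} of '{goal}'"          # Planning: break the goal into steps
            for i in range(max_steps)]
    for step in plan:
        observation = call_llm(step)             # Perception: gather information
        memory.append(observation)               # Memory: store it for later steps
    report = call_llm(                           # Execution: produce the final output
        f"write a report on '{goal}' using: {memory}")
    return memory + [report]

results = run_agent("research topic X")
print(len(results))  # → 4 (three observations plus the final report)
```

The point of the sketch is the shape of the loop, not the components themselves: each real system swaps in actual tools (web search, code execution, databases) for these stubs.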

What stuck with me most is the trajectory. The paper frames agentic AI not as a finished product but as a direction — systems gradually becoming more autonomous, more capable of handling multi-step tasks without constant human input. That shift feels significant, and it’s happening faster than I expected.

 

Improving the A* Search Algorithm with Adaptive Weights:

The second paper is more technical, but in a satisfying way. It focuses on A*, which is one of those algorithms I’d learned in class mostly as a neat theoretical idea. You have a graph, a start point, a goal, and A* finds the shortest path by combining actual travel cost with a heuristic estimate of what’s left:

```
f(n) = g(n) + h(n)
```

Where g(n) is the cost from the start to the current node, and h(n) is the estimated cost from there to the goal. Simple enough in theory. The issue is that this formula treats the heuristic weight as fixed throughout the entire search, which isn’t always ideal.

The paper proposes something called Adaptive Weighted A*, which adjusts the weight of h(n) as the search progresses. Early on, when you’re still exploring, you can afford to be a bit more flexible. Later, as you close in on the goal, you tighten things up and focus on finding the optimal path. The result is an algorithm that’s faster in complex environments without sacrificing too much in terms of solution quality.

The performance improvements shown in the paper were real and measurable — lower computational cost, quicker convergence, better handling of difficult map layouts. For applications like robot navigation or real-time pathfinding in games, that kind of improvement actually matters.
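To make the idea concrete, here is a small Python sketch of weighted A* on a 4-connected grid where the heuristic weight shrinks as the search nears the goal. The linear decay schedule (from `w_max` down to 1.0, scaled by remaining distance) is my own illustrative choice, not necessarily the exact adaptation rule from the paper.

```python
import heapq

def adaptive_astar(grid, start, goal):
    """Weighted A* on a grid of 0 (free) / 1 (obstacle) cells.
    f(n) = g(n) + w(n) * h(n), where w(n) decays toward 1.0 near the goal.
    Returns the path cost found, or None if the goal is unreachable."""
    def h(n):  # Manhattan-distance heuristic
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

    w_max, h0 = 2.0, max(h(start), 1)
    open_heap = [(w_max * h(start), start)]
    g = {start: 0}
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur == goal:
            return g[cur]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                continue
            if grid[nxt[0]][nxt[1]] == 1:        # skip obstacles
                continue
            ng = g[cur] + 1
            if ng < g.get(nxt, float("inf")):
                g[nxt] = ng
                # Weight decays linearly: exploratory far away, strict near goal.
                w = 1.0 + (w_max - 1.0) * h(nxt) / h0
                heapq.heappush(open_heap, (ng + w * h(nxt), nxt))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(adaptive_astar(grid, (0, 0), (2, 0)))  # → 6, the only route around the wall
```

The interesting line is the weight update: far from the goal, `w` is close to `w_max` and the search behaves greedily; as `h(nxt)` shrinks, `w` approaches 1.0 and the search behaves like standard A*, which is the trade-off the paper formalizes.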

What I found genuinely interesting here is the broader lesson: you don’t always need to invent something new to make meaningful progress. A* has been around for decades, and yet there’s still room to make it meaningfully better. That’s a useful reminder when it’s easy to assume all the interesting problems have already been solved.

What I Actually Took Away from This:

Reading both papers in the same sitting made a few things click that hadn’t before. The agentic AI paper is about building systems smart enough to handle complexity. The A* paper is about making sure the underlying problem-solving machinery is efficient enough to support those systems in real environments. They’re two sides of the same coin.

I also came away with a better sense of where AI research actually is right now. It’s not just about bigger models or flashier demos — there’s a lot of careful, incremental work happening at every level of the stack. Some of it is about rethinking what “intelligence” even means for a software system. Some of it is about shaving milliseconds off a pathfinding loop. Both kinds of work matter, and honestly, both are more interesting than I expected going in.

Final Thoughts:

If you’re studying AI and haven’t spent much time with actual research papers yet, I’d encourage you to give it a shot. The concepts aren’t always harder than what’s in a textbook — they’re just framed differently, with a focus on what’s still open and unresolved rather than what’s already been figured out. That shift in framing changes how you think about the field.

For me, these two papers reinforced something I keep coming back to: classical AI techniques like search algorithms aren’t outdated. They’re the foundation that newer, more capable systems are built on. Understanding both the foundations and the frontier seems like the right way to approach this field as it keeps evolving.

Author:
Muhammad Hashir

Acknowledgement:
Special thanks to Raqeeb (@raqeeb_26 on dev.to, @raqeebr on Hashnode) for guidance and feedback.
