A* Algorithm Based on Adaptive Weights
A* expands nodes using f(n) = g(n) + h(n): the total cost accumulated up to node n plus a heuristic estimate of the remaining cost to the goal. On an ordinary graph the algorithm works really well, but a grid-based search space exposes some flaws. There, the graph is an n×n grid of cells; each cell is a node, and moving from one cell to an adjacent one has a constant cost throughout the whole grid. Let's look at the problems traditional A* runs into in a grid-based search space.
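To make the later modifications concrete, here is a minimal sketch of traditional A* on an n×n grid with the usual 8 neighbors. The octile heuristic and the √2 diagonal step cost are standard choices for 8-connected grids, not details taken from the paper:

```python
import heapq

def octile(a, b):
    # Octile distance: an admissible heuristic for 8-connected grids.
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return max(dx, dy) + (2 ** 0.5 - 1) * min(dx, dy)

def astar(grid, start, goal):
    """A* on an n x n grid; grid[y][x] == 1 marks an obstacle.
    Cells are (x, y) tuples; all 8 neighbors are considered."""
    n = len(grid)
    moves = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]
    open_heap = [(octile(start, goal), 0.0, start)]
    g = {start: 0.0}
    parent = {start: None}
    while open_heap:
        _, gc, node = heapq.heappop(open_heap)
        if node == goal:
            # Reconstruct the path by walking parents back to the start.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for dx, dy in moves:
            nx, ny = node[0] + dx, node[1] + dy
            if not (0 <= nx < n and 0 <= ny < n) or grid[ny][nx]:
                continue  # off-grid or obstacle
            step = 2 ** 0.5 if dx and dy else 1.0
            ng = gc + step
            if ng < g.get((nx, ny), float("inf")):
                g[(nx, ny)] = ng
                parent[(nx, ny)] = node
                heapq.heappush(open_heap,
                               (ng + octile((nx, ny), goal), ng, (nx, ny)))
    return None  # goal unreachable
```

On an empty 5×5 grid this returns the straight diagonal from (0, 0) to (4, 4).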
Problem 1: Traditional A* on an n×n grid normally uses the concept of 8 neighbors, meaning it considers all eight neighbors of a cell at every iteration. This wastes computational power exploring unnecessary nodes. The solution the researchers propose is to use only the 5 most useful neighbors. So how do we pick those 5 neighbors? There is a simple method:
Step 1: Use the formula below to compute the values of Δx and Δy:
Step 2: Compare these two values against the conditions given below:
Step 3: Whichever condition is satisfied, take the corresponding neighbor set and consider only those neighbors, following the format shown below (note: blue represents the agent in the grid).
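The paper's exact formula and condition table are in its figures, but the steps above can be sketched under one plausible reading: take Δx and Δy as the signs of the displacement toward the goal, then keep the neighbor pointing in that direction plus its two nearest directions on either side of the 8-neighbor ring. The `RING` ordering and the ±2 window here are my assumptions, not the paper's table:

```python
# The eight neighbor offsets arranged as a ring (counter-clockwise).
RING = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def sign(v):
    return (v > 0) - (v < 0)

def five_neighbors(current, goal):
    """Pick the offset pointing toward the goal plus the two
    ring-neighbors on each side: 5 of the 8 directions."""
    dx = sign(goal[0] - current[0])   # assumed: delta-x = x_goal - x_current
    dy = sign(goal[1] - current[1])   # assumed: delta-y = y_goal - y_current
    if dx == 0 and dy == 0:
        return []                     # already at the goal
    i = RING.index((dx, dy))
    return [RING[(i + k) % 8] for k in (-2, -1, 0, 1, 2)]
```

For a goal straight to the right, this keeps (1, 0) and the four directions flanking it, dropping the three that point away from the goal.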
Problem 2: The other problem worth tackling is hazardous paths: paths that cut directly across the edge of an obstacle, as shown in the figure.
We can see that the agent can collide with the sharp edge. This is what happens when traditional A* is used in a grid-based search space. To solve this problem, the proposed scheme removes two neighbors from the neighbor set based on the direction of the obstacle. The following table shows the scheme:
The algorithm checks the direction of the obstacle and then decides which child nodes should not be considered.
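The paper encodes this as a direction table; an equivalent check (my formulation, not the paper's table) is to forbid any diagonal step whose two adjacent orthogonal cells are not both free, since such a step grazes an obstacle corner:

```python
def prune_hazardous(neighbors, grid, node):
    """Drop diagonal moves that would cut the corner of an obstacle:
    a diagonal step (dx, dy) from node is kept only if the orthogonal
    cells (dx, 0) and (0, dy) beside the agent are both free."""
    n = len(grid)

    def blocked(x, y):
        return not (0 <= x < n and 0 <= y < n) or grid[y][x] == 1

    safe = []
    for dx, dy in neighbors:
        if dx and dy and (blocked(node[0] + dx, node[1])
                          or blocked(node[0], node[1] + dy)):
            continue  # this diagonal grazes an obstacle edge
        safe.append((dx, dy))
    return safe
```

With an obstacle directly to the agent's right, the diagonal past that obstacle is pruned while the straight step up survives, matching the idea of removing child nodes based on the obstacle's direction.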
Personal thoughts:
I have been learning about A* and related algorithms in class recently, which I find really interesting, but reading this paper gave me a real sense of how to practically implement the algorithm in a real-life scenario.
Reference paper:
A* Algorithm Based on Adaptive Weights
Large Language Models in Search Engines
Nowadays we have highly autonomous search engines making our day-to-day life more productive. But search engines were not always like this. In the past they had a pre-programmed way of finding websites related to a given input and used classical Natural Language Processing concepts to understand human text. The problem with those search engines was that we could not search for a complex sentence like "How to know if and A* algorithm has given optimal path should we compare it with ucs traversal?" The sentence itself is really convoluted: the word "and" should not be there, but the user typed it by mistake, and a classic engine won't have a good response; it will also be ambiguous about the term "ucs". To solve this problem we use LLMs (like ChatGPT and Claude) and integrate web technology with them. The reason for this integration is that an LLM is sufficient to understand human text through modern NLP, but it does not have the latest information. That is why we combine the power of the web and AI to create an advanced search engine. Let's understand the evolution of search engines generation by generation:
Generation 1: In this generation we had traditional search engines using core matching logic: match keywords against all the available websites on the internet, then show the top 10 best results.
Generation 2: This was the first time AI entered the world of information search. RAG (Retrieval-Augmented Generation) was first introduced in this generation and is still in use today.
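The core RAG idea can be sketched in a few lines: retrieve the documents most relevant to the query, then hand them to the LLM as context alongside the question. The word-overlap retriever below is a toy stand-in for a real search index or vector store:

```python
def retrieve(query, documents, k=2):
    """Toy retriever: rank documents by word overlap with the query.
    Real systems use an inverted index or embedding similarity."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_rag_prompt(query, documents):
    """Assemble the prompt the LLM receives: retrieved context,
    then the user's question."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

The LLM then answers grounded in the retrieved context rather than only its training data, which is exactly what lets it serve up-to-date information.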
Generation 3: Now comes the latest and most powerful method so far: deep search. Instead of following a fixed step-by-step procedure, the agent is left to think independently. After understanding the user's complex text, it decides what to search for and what content is best to return as output. Machine learning also powers this cutting-edge technology.
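A deep-search agent is essentially a loop in which the model itself chooses the next action. Here is a minimal sketch of that loop; `llm` and `web_search` are hypothetical callables standing in for a real model API and a real search API, and the `SEARCH:`/`ANSWER:` protocol is my own simplification:

```python
def deep_search(question, llm, web_search, max_steps=5):
    """Agent loop: the model decides what to search next, reads the
    results, and stops when it judges it can answer."""
    notes = []
    for _ in range(max_steps):
        decision = llm(f"Question: {question}\nNotes: {notes}\n"
                       "Reply SEARCH:<query> or ANSWER:<final answer>")
        if decision.startswith("ANSWER:"):
            return decision[len("ANSWER:"):].strip()
        # Otherwise the model asked for another search; collect results.
        query = decision[len("SEARCH:"):].strip()
        notes.append(web_search(query))
    # Step budget exhausted: ask for a best-effort answer.
    return llm(f"Question: {question}\nNotes: {notes}\nGive your best answer.")
```

The contrast with Generation 1 is the control flow: there the pipeline is fixed, whereas here each iteration's action comes from the model's own reasoning.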
Personal thoughts:
Before reading this paper I didn't have any idea of the history of search engines. I got to learn a lot about how they work and how they have evolved over the decades.
Reference paper:
A Survey on AI Search with Large Language Models
@raqeeb_26(dev.to)