Introduction:
In the vast landscape of artificial intelligence, heuristic search algorithms are a beacon of efficiency and optimization. These algorithms, which cleverly estimate states' proximity to goals, are invaluable in domains ranging from video games to robotics.
This blog post will delve into the mechanics of heuristic-driven searches, focusing on key algorithms like Greedy Search and A*. We'll explore how heuristics guide these algorithms toward their objectives, the challenges they face, and how they excel in various applications.
Whether you're a software engineer, researcher, or just curious about AI, this exploration will offer valuable insights into the art and science of informed search.
What are Heuristics?
A heuristic is a function that estimates how close a state is to a goal. Each heuristic is designed for a particular search problem. Some heuristic examples for pathfinding are the Manhattan distance and Euclidean distance. A heuristic function h(x) takes a state as input and outputs the associated heuristic value.
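To make this concrete, here is a minimal sketch in Python of the two pathfinding heuristics mentioned above, assuming states and goals are (x, y) grid coordinates (the function names and tuple layout are illustrative choices, not from any particular library):

```python
import math

def manhattan_distance(state, goal):
    # Sum of horizontal and vertical offsets; suits 4-way grid movement.
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

def euclidean_distance(state, goal):
    # Straight-line distance; suits agents that can move in any direction.
    return math.hypot(state[0] - goal[0], state[1] - goal[1])
```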
Admissible Heuristics:
Inadmissible (pessimistic) heuristics break optimality by trapping good plans on the fringe. Admissible (optimistic) heuristics slow down bad plans but never outweigh actual costs.
A heuristic h is admissible (optimistic) if 0 ≤ h(n) ≤ h*(n), where h*(n) is the true cost to the nearest goal. Most of what's involved in using A* in practice is coming up with admissible heuristics.
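One informal way to check admissibility on a small problem is to compare h(n) against the true remaining cost h*(n) computed by brute force. The sketch below assumes a tiny obstacle-free, unit-cost grid; it is only an illustration for this toy case, not a general proof of admissibility:

```python
from collections import deque

def true_costs_to_goal(goal, width, height):
    # Breadth-first search outward from the goal gives the exact cost h*(n)
    # for every cell of an obstacle-free, unit-cost grid.
    dist = {goal: 0}
    frontier = deque([goal])
    while frontier:
        x, y = frontier.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in dist:
                dist[nxt] = dist[(x, y)] + 1
                frontier.append(nxt)
    return dist

goal = (3, 3)
manhattan = lambda s: abs(s[0] - goal[0]) + abs(s[1] - goal[1])
h_star = true_costs_to_goal(goal, width=5, height=5)
# Manhattan distance never exceeds the true cost here, so it is admissible on this grid.
assert all(manhattan(s) <= cost for s, cost in h_star.items())
```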
Greedy Search:
In greedy search, the node closest to the goal is expanded first. A heuristic estimates the distance to the nearest goal for each state. However, sometimes, this approach leads the algorithm to the wrong goal. In the worst case, this algorithm performs like a poorly guided DFS.
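A minimal sketch of greedy best-first search in Python, assuming a `neighbors(state)` successor function and a `heuristic(state)` estimate (both names are illustrative); it keeps a visited set only so it will not loop forever on cyclic graphs:

```python
import heapq

def greedy_search(start, goal, neighbors, heuristic):
    # Always expand the fringe node whose estimated distance to the goal is smallest.
    fringe = [(heuristic(start), start, [start])]
    visited = set()
    while fringe:
        _, state, path = heapq.heappop(fringe)
        if state == goal:
            return path
        if state in visited:
            continue  # skip states already expanded so cyclic graphs do not loop
        visited.add(state)
        for nxt in neighbors(state):
            if nxt not in visited:
                heapq.heappush(fringe, (heuristic(nxt), nxt, path + [nxt]))
    return None
```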
A* Search:
The A* algorithm is like a combination of UCS (uniform-cost search) and greedy search. Uniform-cost search orders nodes by path cost, or backward cost g(n). Greedy search orders by goal proximity, or forward cost h(n). A* search orders by the sum of both: f(n) = g(n) + h(n).
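Here is a minimal A* tree-search sketch under the same assumed `neighbors` and `heuristic` interfaces as the greedy example, plus an assumed per-edge `cost(state, next)` function; the only substantive change from greedy search is that the priority is f(n) = g(n) + h(n) rather than h(n) alone:

```python
import heapq

def a_star_search(start, goal, neighbors, cost, heuristic):
    # Tree-search variant (no closed set); priority is f(n) = g(n) + h(n),
    # the cost so far plus the estimated cost to go.
    fringe = [(heuristic(start), 0, start, [start])]
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if state == goal:
            return path, g
        for nxt in neighbors(state):
            g_next = g + cost(state, nxt)
            heapq.heappush(fringe, (g_next + heuristic(nxt), g_next, nxt, path + [nxt]))
    return None, float("inf")
```

The graph-search variant, which adds a closed set to avoid re-expanding states, appears later in this post.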
An issue that can arise with A* is that it may return a suboptimal path if the heuristic estimate is significantly larger than the actual remaining cost. For example, the optimal path might be S→A→G with a cost of 4, but if the heuristic at A estimates the remaining cost to G as 6, the algorithm could instead take the direct path S→G.
Optimality of A* Tree Search:
Assume A is an optimal goal node, B is a suboptimal goal node, and h is admissible. The claim is that A will exit the fringe before B, but does this happen?
Imagine B is on the fringe, and some ancestor n of A is on the fringe, too. If n is expanded before B, then A will eventually exit the fringe before B. But n will be expanded before B as long as f(n) < f(B), and indeed f(n) ≤ f(A) < f(B): the first inequality holds because h is admissible, so the estimated cost through n never exceeds the true cost of the optimal path to A; the second holds because A is optimal, B is suboptimal, and h is zero at goal nodes. So every ancestor of A on the fringe is expanded before B, and A exits the fringe first.
Properties of A*:
Uniform-cost search expands equally in all directions. In contrast, A* expands mainly toward the goal while hedging its bets to ensure optimality. Due to its speed and optimality, A* is used in many applications, such as video games, pathing/routing problems, resource planning problems, robot motion planning, language analysis, machine translation, speech recognition, and more.
Creating Admissible Heuristics:
Most of the work in solving complex search problems optimally involves using admissible heuristics. Admissible heuristics are often solutions to relaxed problems where new actions are available. Inadmissible heuristics can also be helpful, however.
Consider the 8-puzzle problem. There are plenty of states and actions in this problem, but only a single start and goal state. One heuristic that could be used is the number of misplaced tiles: for a start state in which every tile is out of place, h(start) = 8, and for the solved board, h(goal) = 0. This is a relaxed-problem heuristic (it effectively assumes a tile can jump directly to its goal square).
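As a small sketch, the misplaced-tiles heuristic might look like the following, assuming a board is represented as a tuple of nine entries with 0 standing for the blank (this representation is an assumption made for illustration):

```python
def misplaced_tiles(board, goal=(1, 2, 3, 4, 5, 6, 7, 8, 0)):
    # Count the tiles (ignoring the blank) that are not in their goal position.
    return sum(1 for tile, target in zip(board, goal) if tile != 0 and tile != target)

scrambled = (8, 7, 6, 5, 4, 3, 2, 1, 0)              # every tile out of place
print(misplaced_tiles(scrambled))                     # 8
print(misplaced_tiles((1, 2, 3, 4, 5, 6, 7, 8, 0)))   # 0
```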
With A*, a trade-off between the quality of the estimate and the work per node needs to be made. As heuristics get closer to the actual cost, fewer nodes will be expanded, but more work per node will be required to compute the heuristic.
Semi-lattice of Heuristics:
When considering multiple heuristics, one may be closer to the true cost in some states but further in others. In such cases, you can combine the heuristics to get the best of both worlds.
When multiple heuristics are compared, heuristic h_a is said to dominate heuristic h_c if, for every state n, h_a(n) ≥ h_c(n).
Heuristics form a semi-lattice where the max of admissible heuristics is admissible: h(n) = max(h_a(n), h_c(n)).
Trivial heuristics sit at the bottom of the lattice; the bottom element is the zero heuristic. The top of the lattice is the exact heuristic.
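A short sketch of combining heuristics by taking their pointwise maximum; it reuses the misplaced_tiles sketch from above, and the helper name is an illustrative choice:

```python
def combine_max(*heuristics):
    # Pointwise maximum of several heuristics; if each input heuristic is
    # admissible, the combined one is too, and it dominates each of them.
    def combined(state):
        return max(h(state) for h in heuristics)
    return combined

# Usage sketch: combine the misplaced-tiles heuristic with the trivial
# zero heuristic (the bottom of the lattice).
h_combined = combine_max(misplaced_tiles, lambda board: 0)
```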
Graph Search:
When using tree search, the failure to detect repeated states can cause exponentially more work. For example, when running BFS on a graph with many paths to the same states, there's no reason to re-expand nodes whose states have already been searched previously.
The idea behind graph search is to never expand a state twice. To implement it, combine tree search with a set of expanded states (the "closed set"), expand the search tree node by node, and before expanding each node check whether its state has already been expanded. If the state is not new, skip the node; if it is new, add the state to the closed set and add the node's children to the fringe.
It's essential to store the closed set as an actual set (a hash set) rather than a list, so that membership checks take constant time instead of scanning every previously expanded state.
Graph search can improve efficiency, but it can also ruin optimality. If two paths reach the same state and the suboptimal one is expanded first, the state is closed and the cheaper path is never reconsidered, so the search may find the goal sub-optimally.
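Here is a minimal sketch of A* graph search with a closed set, reusing the assumed `neighbors`, `cost`, and `heuristic` interfaces from the earlier A* example; as discussed in the next section, the closed-set check preserves optimality when the heuristic is consistent:

```python
import heapq

def a_star_graph_search(start, goal, neighbors, cost, heuristic):
    fringe = [(heuristic(start), 0, start, [start])]
    closed = set()  # states that have already been expanded
    while fringe:
        f, g, state, path = heapq.heappop(fringe)
        if state == goal:
            return path, g
        if state in closed:
            continue  # this state was already expanded; skip it
        closed.add(state)
        for nxt in neighbors(state):
            if nxt not in closed:
                g_next = g + cost(state, nxt)
                heapq.heappush(fringe, (g_next + heuristic(nxt), g_next, nxt, path + [nxt]))
    return None, float("inf")
```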
Consistency of Heuristics:
An important property to consider is the consistency of heuristics. The main idea is that the heuristic's estimated cost of each arc is less than or equal to the actual cost of that arc. This differs from admissibility: instead of only bounding the estimated cost to the goal, consistency bounds the estimate for every arc, h(A) − h(C) ≤ cost(A→C).
The consequences of consistency are that the f value along a path never decreases (since h(A) ≤ cost(A→C) + h(C)) and that A* graph search is optimal.
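To see why the f value cannot decrease along a path, take any arc from a node A to its successor C on the path; a one-step derivation using the consistency inequality above gives:

```latex
\begin{align*}
f(C) &= g(C) + h(C) \\
     &= g(A) + \mathrm{cost}(A \to C) + h(C) \\
     &\ge g(A) + h(A) && \text{(consistency: } h(A) \le \mathrm{cost}(A \to C) + h(C)\text{)} \\
     &= f(A)
\end{align*}
```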
When a consistent heuristic is used, A* tree search expands nodes in order of increasing total f value (creating f-contours). Also, for every state s, nodes that reach s optimally are expanded before nodes that reach s sub-optimally. The overall result is that A* graph search is optimal.
Note that consistency implies admissibility. Most natural admissible heuristics tend to be consistent, especially from relaxed problems.
Conclusion:
Heuristic search algorithms are vital tools in the AI toolkit, offering speed and precision. By leveraging heuristics, algorithms like Greedy Search and A* can efficiently navigate complex problems. However, as we've seen, choosing or designing a suitable heuristic is crucial for achieving optimal results. Informed search algorithms aren't just theoretical constructsβthey profoundly impact our daily lives, from enhancing the realism of video games to optimizing logistics and beyond. By understanding these algorithms and their nuances, we can appreciate the power of AI and its potential to solve real-world challenges.